[Binary data omitted: POSIX ustar archive containing `var/home/core/zuul-output/` and `var/home/core/zuul-output/logs/` (owner: core), with member `var/home/core/zuul-output/logs/kubelet.log.gz` — a gzip-compressed kubelet log (embedded name `kubelet.log`). The compressed payload is not human-readable and has been removed.]
FWHP ٿ̌$/p1 =az[QCdh{T-q%ʘVE ,MB2 2``F`FKä`HB#A9LZ뻏-ƺc4L>bBܓn(pi-kwLu'w;N[9p{wkeEw]B8*/X79DQ"$4P82F= (I<DÃ"sI л(:4+iZK9 [Id25bBN,Z΢sǒ2+˪ekuIXwad AXjˬӋ\NjKQ\-s%ޖۏ4"H@8i!+# dR2NR +40v`hV66KC@7g= E 0]&/^\-Lheaqg M CYF7./fOez=%Xv{|mΊ1_\[\P@vNa{$\ӊsÅ.FcTг8< ړ:m`}wQ;$H7\E/SΨQcN>sJE2RI{&zb\*pRl<Fa M24lhCpkU^ΡTUY,itz<]OoW ("egn?*r;킣Yhxu3RR&T"or^j_c}"AA: )WAw7%huw\?8j,}+#:d[ƊJy᭣gS/> Cnv+uG9AqOa *?ƺv'to/o.xoϿto.)3O7B..Q¿ڣ^_jԫ}Ws#vx5MzB]ޫr{Ss) nquwr,H7fy;c'N?F1.3MR̩cB b:y7n,mm6>%[⫘-ébR<;eYNw`` ~Y1M!~6N#X1\4uMmI+S˜B)/@$@$éHM$G.|k;2E)_C2=w^!!C'('g{/4 &GQ@ 4ʹD@PlB`|{s.'cGFs{T"nEyrxmʠs [&J(+^(nC!qhF*D*3ƒqWB #As3S*&]LMZoG(KѿF M RkB8\yHz&0:l^C=qwv|_-quF4x{O`CM~aTLrZtNTYmآ#i&CGK|8$GZ72GG JxȞq:R%) A[\y\5XJTUH-Z,9ʚI-#J:AA %' MWm;#|6Nc1M #|연aW-_1͈"%1[Lp*D:@ "Zh6 '`\x4r$CD/w( o#;Ao_A3Mi4Ԥ6]|d_ e,%+g m7_&/T;WOJ}6QJ(\v6gzɍ_KR-l d7K^2䙝?ŖdK[mJgEVSd5._*א-R!lBdT(% x*yraLI@W<2cSȦ^X˵̄&mcB0TMLVŏ)AUU,FHe2NЙ EI6ˣ#eUƏ32:ؐS‘tIよ"C3Sof6+j4x(YЏ䆺K%CpHbKI-D2Yg" YIs+DU`:s%xNEoy6 scPQKOB#%9%Aq3B@oZK")E޳|!1eurqRv }F ~$ dmD֑l1.|v#Ggf*twrsPqԌ&E!nVY> s=HqHa0-z7ȷW˘z0q^u%aŔܼٲvu]1_dH}:(0+^nNRn=el }# D5qJ8,^{Vnf1qbz<\$^0_N07<^B; V/Gynahyy0u֫)tQ2a0&m Wdz~t A^˝X5Cr[>4ٖ-TTҼkZV5574Ӵ,11S‹0[[(Zfީx6%l +-EAeꝎ Z;JvFm~»DLlb֋y[Zŀm杺PiX"j}=}hGQnx gP^;J 'F\/s*WQQb*&֖$wep LLx-d1tӣ.hz4# +AnFIiǃ+&ل-$: ?M>9G?6O>;0Vә?{[o}n16iʒXzv2fe8gizӶg Czԝ=\>4caf#W"pD=p5RАЍ)79E`zxUuFԫx5%ޣ^}hJ]0]Z1kiL+i- *A@fv}5Q%H; o@8 }~įtm}(U;aճ*~8j*ҧ+QT/jVm ^70?yN7Zـg >etd\JcG+pJ[/D .ѧ0xǙcRZLB]7L*ŅG<JRe:C&`X"  vpY5rvmjSJyK[o|k{go%B%6=ycl܌棵i\ )5jhϚI.QasprO P'QG]K&Bse&kG ,(c}XTYTDU`CL )@L2'3AS +#jpvqa8Kϻ<|j֮S11}D0;^X\>p9;Ooy֊B]s }`sx/#ͼu{wjۦ^wkddXr;?R->=x~2λ~M<xQa9W8Xsy˦LӾ;6~0}%#ZhpɊ/;ŕ>oe@êe KҗI`5 kBdE"zww_ qq.DY =|ޟ.T8*+_{P^zN^.` O[W>$jz s2 sSj90rF6q 72dPBd5 EvU^+^^%"HnJZ \1K-x~n{v IkNR}5jU_ծzUBC16YzΡPQ#wmR>k&Y8%59+1JhUD "yL6gZzLJfFn͸.OՅ.T.7jy GuyluuOB{8]j(,n T1dUJh!##H.mC e DKJ![R^L(3dC2Xh9{ɀ[z+ %ؐiJ"Z^Cl9O[.[v< . 
HV 9(sD 3qj<֚g,#A22G_uziP3n(S+7yv΍xS g)3fqmjr$#cL.&dߵwp<+s']{?|HD3E&[ ݵP7,Պ\Э)ANkpCDv^R-mvOփyHu_bUpv[b.q /s+ܮ,~3k俽{`c$R"#zxˮaD0A0 bQ4e%G=sx9uu*Q75j׽R^uZv4p)#e`Eu,]LIAY`2ǨS͖N5Ŀ ˃8 vï?ǷoyDž}odu@30XOt ijI [--15z1krǸG;iaI70NNc <-M;9òi!I/[o,Tu*lԪKi@k]Dz>V9E]3/71[S$g鴝ey\@ vum1CWhdb)2VHN:D?r:%5J4u:B X{ΙI&&Qw&A#Jzث#KܴVH3'314OT !fktCyQ( U UejS9;['tvfIuV;V7]B}Ux+g=ckvko4ʆ֮(u>QVq&MР*4sʭʭ˭ʭ% N\1 eSt#Ҡde'CV&X7sޡȾ09{o:yYcOtK*e#^{⏳a6DG>]8БtV=7|!TGHүez)|9^SG=8zYp,X{!Y.R(GU.I gPAZ@:D4exe ]>3!(E}]9G*@cGf8jO18naírި3^ccxE/bpjB&=E3]Lf99J<t*[Ex+qhw: OW5d3N[%GG][s7+,lR#~qv6IUR:}Y 5E)"[944$%eLwF?maXl8;f\xt?_Ǒr<1nv{ 6Cw9=ݳ<rF8SŜ(J$%GfΆA.I%Ir"O2yt0KQ\8VgMO&0QAqJq5pk8 DR@מ~}d~E,ƳLrË{g(1NNO BsmMR5X# >P2l~Wʍ,ُseN'7.q6k8U/6\LO`L_^wqt{sYg47/&O6et|eў߷\H-.NGZ,ٲ7UoRo=p e\۫+~' g5qX8٭8 6w{kj'6''cefޢU1b>byUC{+VUGVunbheyua]WN^J NKji<7 %l\M7 $M;$+6_t;H&iFnt%dcz&fRx!̗- #/'ͭmɖ6Wߓv1-/h~㞇4JL⽅xwSy9+e23#d]m[MYvB::k@,gt@yf(jRw'b_^vLҴ[J \CBR"C !GI􎂤PUbV9(sP:K`HRbqn/3;a۪M>!a˜q!Kq&*[̧m@+|zOP/'/ytF"tEq&! I"61E`՚<&ρ1d 'u>LI+@jpTfB Ur)3ӟt[$)}m*@=Ƙa04tp*B7+ c&Mµ籣{ǟىxE=Pg& &\]G3T6Ok,o}h+_~}tvkD ߡ Q_ #)T[}{?F0xȬ%GPj2}5? 
zH{v:o/~^dW"+FyZbcܽァ0*9/m+.\ 0`1.@Oo_pA15mդQ&0iHGd T*cx[TX.*TY LmJNdTǎ^L g΂\3׫ٟkeYz>@Au/2oTʼQ7*Feިy2oTfS7._嫻|y2oTʼQ7*Feިyz,%Q7Y7*Feިy2TʼQ7:Xy2oty2oTŬY7*FUʼQ7*FeިV2oTʼQ7*FeިyʼQ7*Feިy2oTʼQST{R͸E?h"Rx s$irAƃqWB 4SVskR[E2=k„41XKA <$bUIx )̓6o@sJ>K,h`SFEL1V"r70~+6q Rf5$( RlԭrVY9uqБKVQR%dU^KVyɾDgJbaXXy*/Y%dBbS!+_0{`cd4YCh/gB4%E8!&H+-d&^ ॳ498Q@ed19Jx%Q!0U $}, /5P21C7D/(Gt뼷TQNH]rgW3U^ΎfK(=^\ 5ϯM֓xr} jϳli32*{gӶM?N#Ck8,w7 +Sl8дun47Zʮ;7Ci^wu1 g]Ws%(ٱ&wn/)A\]y܄Ot!+zn K[͟7g| 9v L.{uٷ a~X: jL:ؔ`ݒp8[}S%n-qC?#{ؠ |u$bLEQQAMh%pAZe".v,p"B\\="XBH6Lϒ"8hQo4#2z v.p<;C*.,{Yz/j3J-imA^Ke-!:P@-Z阘KXɝSPOq/DN1&I>U M1XϢhRNCTTqG.*m ge܌RBUj ^pT^dOU蓫I}g74n4:4N_!„VqB5F(=,AeL4WqH#q,'Ӑ?3βJ%Q35b%NMLX mJRvqɹ<[J:jj+$)!<ʴF&-,) Pe5џK26UEa5o$!#2A@bLPF$# Tv g=l(^6b}-.lu"0V@&)ZM>~ \&4w'l=P}մiiZhV3۔OTɞ(dDơY=NpN,:#ՀIƫ gzgUS:ņ ok=c-=*[Ċf&HCN]yUnF4Ab*^}"._Smpp%z's4B@l29J> pea|ht@8sEzܺ{mK1B&($zj4 A@O6ĂRf!9+KäHB ;sw)ZEa]l8;nX}}`y$Vp}˻ gBa絬.q]4KJ\共+<(\G1:aj@I①"4 $Ηd?hr jW*vh-y0L E2MqX@"2kj{]aG4gzu.WKZmU][+?sG/5Gxp)V#j?Ҵ") 3*W K4I8QJ-Db:D;QzvnU*}׊Õ5 l { ?0/:QZfћ19Sf<dM MY&.r f2׋!J{JO7' kdMpi#j)l<:'GZqNy8r dFF_5(Y) s!Ln.5z 'd>̹>d)4R~1;?!j7ç6R#\ ]kH泄32wիWG >T";;/Q.kG (x ׊zJ#3naFoʏ.',QDX̩<d6^REm%xsƋ '4"= y{O}ݰn, ;TF Y+xtMppX+#{׾gE$h@#c󏳋JIlqÎa *f dz˃0lo~C׶T_57b9/ ߥ_U{>L]Obmm_.?7osLv'o:]}0An OW?G% p[bCŅϻqgoclV//1[SYJJiưynݜ}c$g$<e@'`|vvwu;urJ)2p Bo FIQJt&㼓|0Sё>7yH.cCB)(\xj|c{i0hGˊ$_y%MF8bj%BT/FzUz\9%?7@ =h୍c!T#PrJ(]PGV9؝e[ۊiic|5IBL/GQKξɻ޵#ٿ" 67  LcgOAO[Yr$ىū+Yv%ʖHKQGU" p4^kfE4 uYǁ=ªV / GpR=2f[-k0S }ޔ23L.O:~jMnj:vy{p(B]ړu|2mQkg=i 4IcQ8/pZF6Lkހ%qֱ$IO pKT;>ػe6$!ZkG|bFHm9;`IQmH":k(QZEd{!ﮗ ؈TR$2AB)<#:jPjiǔT'Vx絘8w^[WIkLpf^nq'욛L'/p&W ns"Շ`MDP> hk'8Z3y=Aۿv]eT])8*GRpT zιRp,JQ)8RpT JQ)8*GRpT JQ)8*GA]RpT JQSV[iFU ,_)8*G=+Z)8*GHHK"2JQ)8*RpTH2DYPxR8.{ ;R"KVYNxQH$B eIDzxQ$BQb1J&ijb (w[ ( Dy &rLF2gqf ||.ȺMG7^;GZߢB#o 7LEݘ|֙af7Ty]N:Vvink!׊6MVk.mBε~ԓ()cK7Xm.Js8P[ߛmXM7[v]~Sm͕wt_[ԼTr|Fk꛿]umhK_PW3?;pwk,dxZ,-UCm풇]w!Gˑ[͟)6Dv܆Yh\;@nBj(HDgĄ5dߔ~όf`|> 6e߻|̌{]H}I $|}|<[maE?.DžqAprr>?#1I)&LKe)nRJ녑:(K $& (8ժa D=TR$HBS(t"8YEٸ&Dώ@hyӁлgǴYn/jUނ "*AWN 
iLB)gl5\)Bb6ImVMEO&HR9E'jCZ Ʊ`c)U1H*TMhJ[blQʳb3cW[( BQmAm-U^<}X-W'7?LVw@cA2};*4C*e;DeNiØ h8(`ĆXk!60ceɥ '$&Ac">=鎥lhˣAfDZXmYvEk“H!@BDထ42z*)璡 |^&QwZP!= mMKc!Bx =,&f{q1cǮQZjK+ 0l85I*ʃ py Ixc@q(x3D@TP*H\PD f"PL(@A& !s,PKVg38B8]W+6<7kK1aaqSc U<]arJD{O "F(M OqpDH*o+ hִAÊE6EWcT{'ydyGtq|yG >M]M`ƾԙ}huÜC}SPI pI (\@h4D'J'D:Y}J/04R16 &eFe5T"="lS?gKҘ 1HG>;!mv+EzZ,ç}Oc˩BQ8-9w>?1Gwd2S9*h(Xj#jk%NR^rJ<)}85A 8֦[^~} D|C1,~u\Q#g,R|d9.#/`1\ gBhTm07`e6TGUshvάNx4 ~ Fgg ù}0@3כ z%T=XP{x܎׋>8:>[y1o^n}|߆9_yjCE_5Y,ٲ7UoRo= eUA+~t9y'7/ru֪|j@żYW ]WϬXU)X)nN|3:Sz[UdWIGҰ6 8t{! y6.tW*Ewnrd-WKEK}`~  jX1s[n[en|Mu,RGA;\A[]y2,SޮƜ2MfˋU4ˢ:]xIg (/ EMᔷI@/Uk5xۭw\CD,'YrPt>a4Wr5Kү[- V5H!z]S'>gHXMF8lUDݦrb\]I/!1F J=>XD; D%4BhSxt ZA=pq<=] Bk] F 8M\v'BOc'?{ݪÏ ej:ogy^._4|15mujɰ}V7wƌ)gSeMOm=[x q-- r]}am[mvYy{KxoztV4gDC3G;Q<Q޽Ey0̮RB;8hOl& I5+he\h5ETHkFnjuK$0.xQqޜ|SޑT;-{Npy2CpX&s՛%);Z~bǔg ILGygbڱ8i}=Lm!*aG/F㛽A.# w?jRAGZFi% 4^\³:n; Po^8^GWBlˈq4YՖt5#Fɮ¨\˗("/FgQZ#+U%jϒ mFs{GŎ"dzm#6uuGġ.v%/c֚w.Vn|߳N^t˛y {h~$Rw<Y}hn~}cKui{s {<%,n/}|Y*Fi]9C"e`T<j[#Tbk{E<.W_$&)dpa`)nRJ녑:(KDIe^U4$Q#(-Gp7qtٻ޸rW<%@&Y$4aAEƠe [^ͯn-e շ.oթsHV$fM$9TZ/z1}~.5G[_^\8?]p;dz+viG]{>CQAa2Ts(^ωksR}Ց=ft [)-%׎|Qe=5l%fkړh4hdfqd||,OB? ݂_mMå˫ ph|To4Myo?_N7sĮ-(r)K[U3JSNPXYsq pEqg\5aOQRlm򶰉H،F\FoeqNr .  }[=ڰ+EUZBN,f#W(5kQWq#pl)8m TXSTqFI2Due}0s$=:0>DOED ₈""R_WgIYSakZǙTRbkr6IXܷFaG;=\0/y*.\\pYC^ AqOG~gݔp>>bRWjF9 3;l?'bT)׏گ7`v[~܎=ų~cuO+b|Ϧ'lug4F󓂸7V><0o°tӼ?svދ?yh~bl(eZI>LmV5Ė\׾#6ZUIܲ. R:͸wp{WEx2btJ #&!r$-.+<~Bm=BP:rB1 GF]F\T>ŻzLrU Ywڲ ^1!% 9[}| (Q;k,I{MCb)XUXCkTS}WuF\b +B@̾Ë2ku Yt3H!QۥjHƥ"vTFLx3U9c>Z Hc-'nŀE F++p-i RTGpFE< jq`yU/}(q.T +k f AXf @C==r51 C ÄQ!VK+ɶZ9%;s6Y@ռ_ fDi"}+޶ LN)0J8I9nXۡ#;^cո $uK2!R )~U6ՂBb|m4Rγq Ȍ4V@ eH v iD*3rP*P+F tf7}rng(&**f#u(lc5P߂N指<ΰ |vջ K}ykVȗG䳣'$h4M HǀWP\EY6Y|iAKW40w<@dxX_v ~ὅ<% 4I"eLbyMFA>0ِ3]72X=% 0/e Z->H!3xpGx!cx(`FULU]/30 Vɒ.:ŽTaۨ~cM1>FmSծ'q'sU8C,~ /)(  Ҙ{ إfI߃&s  衻%_ShxV%jA׀R")e`뙍%\͑Eh-v,JuX:t T(d$9cm'm! K&O N%yB4H,Uc'7uLUV}4VCYL0a( !DA>&~j )bl9فqj%?ϐY gFC5Z2eo Ԗbv}魨 K* j-P[A a@U?o? 
{0J-6ħycρ9ΛUuk5-nNs}W&zB@0u3$^]nJ`V`Cӿi [TNiOw*Uk6- Ĺh& .m2S^ FriCiFlbbѡ@!(!A/k \ 9P/ȍ?"ihP"eN `,$NКƂ)70Az] >;Boy xmKciB'W WNǪ@W((CcDI6U렣6 7?C砀\SPF%sJĮpcbOIyW 4ЬVV%0}ͤk@&c>v\L>$הzcB+̈́h?샩\qaKh؝J7VPp`(o0`Rp2q&zJ1}]HM tzVC%q= F '`8e߆J.`=|{vWH\=lU[p<=Y*n_Pgvytq|&yJ KA `i4X, KA `i4X, KA `i4X, KA `i4X, KA `i4X, KA `i4X, jU||<KyRVjem-\OI:%khWԛ_N/ߞ\;_yu?wmVu]7x<+L'7P"$})$ :k&rf圕md7l`{aoo?.7]gtKN #9ѫ{Gw_b0sx1y9X>FBIA)"HkjR;'u߇3.B2NtuzVS-k^6gD˓_E}͂%x`6'wNߙLy5o>#&W_d=5?}r ߧM^ D ltvbs |UKb ,OR5IKgšon/1\c@NSe7_kftɺ2Խ|WO>$QaMHm;K+ܵ󶋝gQ%߭9nn -RHJMcG}sΟ7?OF}c{tN߼ݎ-۱=(rxǷܳUjJ+Yw]^&F;{Cg/b 0}{&?_]zs{cZ8BnpuI,688`OkIڲOQ4Ir8͞GUWUׯ{7K֦P^zxӌG 5}>ȯ). vsX zC κN\v_~{Ûw?Qf޽ۛw:&{0 0cW۶^ ë9os 彪!7>0`ڠZ3tw}w7^[ũ9eJ:cI'3 l~=u´RUBuTX:a@_N\sjcectWXc'W %zPZ4^n]w|RZي:U rPRVL}y5 t1,<8L<e#FgЍ~ n3mISIZL"Ql8u%D,uT nR3vESm/t~&Ui'k{}po0 \~.[mY˃ vk8ްN kslɺ5؂kI '2 XմK`),%DR"XJK`),%DR"XJK`),%DR"XJK`),%DR"XJK`),%DR"XJK`),%DR"XJ1fC[v>B$39@ :P4B۟#icQ7F-i*Y+ 'Z*@۲%LQ_ʽ>k?zy"ı? r>ޙRİ aG ;bÎvİ#1aG ;bÎvİ#1aG ;bÎvİ#1aG ;bÎvİ#1aG ;bÎvİ#1aG ;bÎvİ#1쏊a1yNv^ar ? ;/*c؁J1h{SF%xE/Z@G-VU^V+<hHXb]pP/Z:@M#; =*a<(鞄NBr*l$92Am.RNx \;mD}yU p/!W{v=\_>|f8F3o#w}55 ˓j}ۧßٷRߊ>5|!X#a%F 5G9bkXsĚ#5G9bkXsĚ#5G9bkXsĚ#5G9bkXsĚ#5G9bkXsĚ#5G9bkXsĚ?2A4~]Bp.a`!8:50c sK?h,G@ڥ * Chs-؃'cb<(*bL dب `~|cEU’뻢۠O@:挭38Kw(R̈́'kQO0E/,}l5C@3F D6RrqXr/ux8K+* nlOPƃlzΏMU!6v;Y2L-]OOݻ܌>atlg"76i\n\137w]z}oF/ r1S8@ Z/ItG[:ٜڡgZVвݻ4z^4X١祖p0is3>,_vC>U}QkƠє@qlPI~I>?ip^5uյrh]ߺv enoz}z]p=m0Iv?kDy߭M馭{뚞M>랼mZfI! ߜ$ߴ |SxdU\4U4=,\2,ݭf-h='1}A?5 l7>}He3_7rDD<cXȡ |uqdBxbX8'SÖj)G6:RV~;݉lmC6< Pʝ[q; qWy%(Ց9e{&H惂T[E6{hKQ&#$p 2ު,|"Nl[%[Z#g@tJG~ֳ-WNЛ%yZY_nKEnR?VoskZY2touhё#YV~-(G)KAV(PU_ʈ*GU:)I ԌϬRB,p3kHb1I :qnL_נ诞-y%ۂEiF `NZ!PX*k] ωqȑ 4za#F%ʭȼ0T{J hj.0nbX9k[odH8j@;j@TUԫ*5j@ϰ6GshG^9bjWj2K^e|.[ȜVRKoJJ@U6y_9{45|7!)ws'juHE%%2)pY:ҥwY;7&˼'TIn<ǽC-PU۫Y/`Eb} ;WC߂Xy6Lr-໫I!~6B5_qOzƇfLz-û>O~YԂ2[z)iUi[6Ew82=ej5KeuRVp~մËϫP0?JrNTWw9fDx)3:唙e3\LsBLKfa 2CҙmfNRG? 
.%?3i922 $%hë`]TN[HI"i%Y0EF%9ۯ"BΆf/ǹߢhBI7H#,hqBfw+Gw C&Q K[+EQ;|#5a!S=|U^!ʃaeԚRHBê ;\&VSؠQ[%Q+c*rpJDYI6 *-AZtd6\Qg2&([bQM-&|iL6iَ52q"Nj8ƨ.3o.7&GħH 3:;-KT;oV:-*zcT/pZ<&9q1RNs2ĭD!$ʂhωp" 0Uqkϸ"m 6i[;#%\WE 3{0IxaF``HEpWkQe0Zb='k )WRI :u>/(b'HBP| 3{xbI bO)R^.%eKue)},(hY !#zj1ri m3Fv$tuGB%aEU%C??|q)ת'T1b*8?Xh .2u.ЅZaO=PcM,UvBg# lP ;uqU"m((C-#XS{\q'\U\IGW{RI+FvW(Yz%U!ؐW@.#\U*TJ+؝T,%)]/t+W|:o|un2hcCdy/o;x4l`qߡWܼb&J-}m* R.* ֜*%e(2fH\JW\s6 UrI@q|ĕq/g$gڏ\Kȹ+a8PI5잣7hYz&dY߾/M篿Ï߼nΦZk̶^_z!4XK0VVFy^In%l3ɞZ΅RFOwcWV%"Pɼ D$tLkŅAeDGh6B)ݙ›͋Nu,^Lä_8`B@Yv̞3J"2V9-,GQM߿~"KA\Ql%&le_&MH,Rki:h4%eJqLJxŢ#A,ɲΒ֝OzM4ƶ'p?`/&?*'ǐՅw'xT]\<;M}}wYB *ړ<pD=GGC]wهF赫W@eWem\\pTs_..Te]#ֺfi@oq\wCi GcM:OZ0WFELlꘘcz;Q +Eo7%|'t.9l0_ƒtU[a u8jG"}<3f) A*۔/lu4/8`0Tg-(OU?PD2}:PP Q ~@ }I3f ,fᔶ^/XNSVX1)-PF&Kv:ݑV4jc:Ѥ6>xUg^L6%8礪lϹL%0V04_$t"A'>%vg9וi<;TS UPy5DӡvuM#:7P;!썗L&&"ˌ;.Mf9X(Q.fcQeR zDb eHч8U,%g29q JT&2lWC[ '9&HtدB蛠܋p vw&zMʭMΖ-]Vݻ]L>B)ۙȍգH G[?Ctj]QKjzs4zK7m6CA?lͦCjYQjƭwiz|x{ z^hƣцw7>.V([x 79y]TmitYA ̴C-6"[ɳ|KǚLd~Eِ|R -cue}u8ʂMFrl|nN.jgoyŲbYHHDGs1x>@?O<ecNA  |TNQ*0ȐC YhFdɭi5 EvU~WЋ#B 1B*i-pT 0Fvf;P:\+O}[;+5|GlgWުVޞ D!,q Fi(c;i6) uJzz r&*AV .1ٜLk9W3)5ckm#Jy]ghL lL)9TR&cPY%ObYieb$ AHQԦ&j.JY8G$cfض"gaZ0.Ǣu;s&x\O,e$ 42NYni\;d"18LȐYQ!]E-Pب 8 >dTe}9acԗ\t~¢E#Ոeh:iM/t Z&@"9$"ò hnqYvikW#AȘ.#iT p \LH!Z cRs#і^p(IH795~acu%Ջeh;ţVYWzrBbtX}fq|' S`n=qV?kZGe̗Z4,+UPEw҃@)y@:3x8ǥ;I`ɠ(,Hb,N#pGdonaYp۲}mC*Ic̃*yxPvJ0LZ涍Ѷَ'gzH09Y7EI(xzed8$|d)6Ыڌ^m9UÙ)PKVr^ϖJZ {_y TYm}P6`Ѫ#/G{dnN,p!sx쳚?DY^\`NHEa1<"Z10cdEWjR7|Ud;LE)F^y-K=KZ|"*0 9"8 `KkM 5Y Zmn/UxPW+,u0FԆdc9:RΚR9􈨥*`@o:x}c&{#hqGQ jvNobOoC {ooN߽V? 
L0-H$8#3+vm M͇vZ6z6Ọ|5oŧ1hfmmH/nױи6?ev՜aK$7OPUcSB,au~n1o)Fx2H񗫘jsvN2<lݝucgZX̩Ĭ*~.FwlNqAk;5M5Y))sfI杕I(b茺C{GjD1yH3Gm314M49 !fktdCyQ( U &Tˋc9mO<k5MJЭ#_j% [=MֹW%\P)uJ6*_?`N/#bc|8](jΏ- ECf(+FR59786`ū)z M5]( [%-oDwΚ.m:$L_rwz4anUӯFoٝ_c/j~7=N\Tzo~7t9 'eR{=jdt#n].7^GW%> I*Kk{VW/B f`e&[Ҽ$J-rOk1 5pĞFH ԛ7Jg,Q[R2R EBEFLi%RǔODXAA9""D+/|fd32@1E)c[#g-G:qv7fhy-_NΦm!A`8SRq䆺K%LpHbKI] i9:YJ*k Zt\@8yQTֻl=7F$T9HFv g}r<&y]ͽg+_r[tβ|"I] [yGH0Kڈ #;RY)ic\rgGɎT]n_HnR\n?ôzv6ڷ]R B eg6h2}N>kiPInuꘀc;jNjɼfc1Z .5}.e{p9ji}KR'KBAxƃ=ᢄiN#E#^q: 9 3$ 8/, ,'j"8 r[&]KV6 |l}c(,@YQޔwzUgeQbs75j6a"Q!I,6{2Xkp@xX QsY1NXy<,}5mz:iE2(UcHxP r*$ZbiEŝOq|b>C0j̱x:J<xH`IrZX;B'GV:':oMꏰR }#b? m#@Qۖ]?l' "[Km("FwU]0Nq552Ji"ZRHZDBsZĨ3h‘ǜCґg۩&1;&Gi`CW[_:E؋>{ ;nYJ%nOLQ^Jzx4~DRewMu\ʪ#(W/B\TpUJ"}!D8Ki`ߢ<$&=vSӣ(UL^MN2xQ ! e,8ƵY#s1?7 Lĥn_: v9J8M*p~ӫo?Ɠ wGln,+Pk:>5ߏʖ1v 3Bl qn[a3A=#C['բ ׫PgRVŪuW >6c _%'h*7YmlKDj6݌1o",/ZЯFg&I!!$MeB2Y%,PE9DN ~B0yeh6Xrm0f&f5&ObsS-J)WfWح99&fSZ3PH=!uE(F L(BLb,>]E~_\=wLj~Ze73|@+co~!efJ*i@HѲeHx*+ T 'BXghvkY9UZqL·Fr 1h%M`D=VwER\H|`J^ePQwX F&s~0$ ]_HDgͪ]??iX7wG1 b:d'/&W|W\EZ]_Ԯ5wN> A~1?[r !{_y TYm}P6`aTa:⦳g;]g?T8N!Wļ 2ם.x6)K[b$ǐ]TYe1|C{qޟ=͉Ѓ^ˍ%-wʃu3HXb4>aZ0ϥn,he^+'.K0;Q2G0Vh,:RΚrpd*3->p;ψ)?YwLdYlN+1CFyqdst2KE(3-E)A?eA]s Ҍ&rmPA[eEPJH%W  Lf`Vpy%F />QjZ5WN?ʎ)^f"ib>5Kn_d *ziys64 ⢈ZܖJ1v<*l_oEnPnnϣ^P/lNUSZQb Flo@} vO69CWhdb)2VHF_6nOo$uuK Zk tHIl%3JT2P3 QuO;U=2[3ic6% !JzAޡtX 6[s $EBr)^Lޖ2>(ˈIn*&C=Ѹ'Пqvb1*SW8_ eVI)C"ҕ)Wg}`җ7ys >I'[lK`DN $CFQ -f6!fSJ>3On~T[-k 7yJcLOڡX[o?mȶOGp<Ϊ-> r9 aUH#lDd:Mv:&ߏpz/Oov7yIHͬfrw|IΣyH%2:-6.'k48Q9mGx#C.wf`DܚVqPpM"*[I+^~KE!м1B*i-pT 0[ފkM(N9B8MՅ^7C/.WV h.1&Ksh11ڤ,|)vN=kME[S!0<-6Ѭz0hHs>j[3&ѷgх}unY^>.q3,;qzv5ls7T~4| g/\c121dRHPM0.c Ke0.-yk+I8U.Vfn(OEmJm!!$uYCp$2foXؖ8k053{#ح raTJ<,4 8 di1r fȞˎW;ު>̊@2!CdEGtMB@a&X !eX$-yXF}?W#55bxSa)hLQZs4fgiK] BtI]ֆd@gB 6F (BlT5qֈϐ^u 8[::uiQ^t-E^/n>% <*MPF+Rid-'KP;&+ɋ&cŮFGWO> kE0m^q;3CZ}wzӫD_?fZ`ID]ͮMt*  McU MbM'bqg#ل6X2, @01^1LH^'K^u$YEI̓f=+ 347a.xUCR!/y#]yg/3uŐ.A5f ?)&n~WeaR5&<Ӽ%LdARg*|C52{qtؠ eZ ҽfћqVWe ݻu mwݮ׃Kj׋;? 
/jywgQ4×MA&7gogK>h\M.+Q/߇%׫B>> b BiiB\9omTE8ݓ Eot‡OŇO͇OƇ08 [ĤaM94ٻFn$Wr 6m6~K{Kr[u,i$'+Z,KLْݓLfbQ"YU|v s}"!I.fǽ ƗsNגk \r63#X: Z@ʕeїzpuQ1R .e '\Ȭt`,ӑ;NxG`Gc\: 7zʅC4h]pNQ<~rwA¼7sDC=ɇj߁-ګ(]le`JE@_c9\d΁'ٶՌz-H6pG1<[?zaS9goq"P'AGND "w$poIưYzo;>a]G PPEе ?BBx4z~Ps +w^FUT0g )^8\d`vh SOt ~7ЃoB#ۋc mAd{` u5 F1p\}k](dp zY7G2z]nB<YgDZA( ̘9,.AhVS?RJPJzf/C鍯P@pVFĠ[5qs7?s37q9bjsos*Vp:I S2`He+/-wkoݚ;8 ìQޥ\GP^l>-T-smZ↹뇓#VG:aK"hai) <1 y`NE]wqp*-@ L!|t.?E=#}k g*4<@s9Ӌqae t蔯B g@يۙZ4Ev(#0~mmZCޘWq0B75gWwtcp.}'5.+d86R [й۠]<ٯگwN_e_Fu\'M=˓XX6Y} }{ɒ~)Kcgj?Qx.7'ﴟ-5$ 8dĹMfi:,&$迵8:!\ȿo0Fx_ #owA"sn0,M(B2sǭ)wGqJju8o1ގwd,.̗ 0Ξi0GEͧJ6q\G~\t)'?E{n08N;+\Qi+qO4uD[-xUDB|dZNA0o>2A)$ 0,1PZĩ3,%ɽA&TP]e$;ʬ*~*KvwWo]yN g›g›=6t}A-+fy'Lor}y}avT-]N̼C{=TQMF*Gvrg5S`?l HhN 4)XхuIF5C">W$$5Q"Nywe,ܕ=^b?ն~7竸vXs Y=8M{0r C] \{Af]K\_Ǔ/iVo=E{Z_klNwX>G2PYok-rvܲ։CNH0PHkhχvy2ꔞ]t-u)t2D{(@zXêVXg=ښpΌ=zH$hjy)f2NYY*?YFVLk^%rֱ$80=z ;# x3z#`]#; r+ Z;3BjN $F!謡Di}E -sa=76B81 L/ ψ 3%tYbⴗBTWHVb2teP\1 V`o=y!cg*} : F4@2 cI9[`_goq-yN訄|p|ۗ%gl{)Y#@Dv* "*)QD,ubRqHBo|R:$\(U6F)D;MM.x뜡"!(OڄД[#T8 5.ߡƋpl;iJO_޲|j;*UC-EPsK6lK.ܽtQ:E7+|?E Ku~ 1Rӹul]x;gAֹ]v>q{M<\ZׇxwO 7zA'Ԯ̳鼥`dVGocԙXj;_E}{5Ǯ˓U?])<{ƅ"Y:spsIa=-qyKS){枲MȩQv4擡R?NAOKS.|c'rtt>q8.?P#1I)&LKe)nRJ녑:(KD4S& (8ժF#h( 2O}F\k҉"6bj0߰!,s'θmrϢ'ȶpփ!f:nG܉;6{Y~.ꕿk0Q!PJZҘR(#5\)Bb6ImVMHAƝHR9SYIhU ]Ok2ǂAV fIiJ1_XL3//aF0%-5,*,HA<}P+c.Tqp>)EK)MV-j8 I,XݢhU{!]Pguh0uMYrJ1-w g4w똥Kx]zNfzegRR?n?Դ"F SgG:@mBk!\[J"CY 顊t}n{ lQXLzÝx/i"xOl)cFU8F$?XC)Tar#eY=zq}eq^{ʤk)yxt} ,_8(ST+ypx'< ɔ\5 W4r$#c*u2_e4"Ԙz7Jڐ{zy}%zru|~e)M`r*EM.Ѵ Gs+N%%*{k4/ݛ/ˌg\›Upv]ÔKaN0'oWh >yޜs8|khHcKJiS37|c3)bᨶQԃL>.=lڟ6?[lsljY(:);БҰ8'drͶ<1TpRCc⟳^쟝p?;o(߽>׏=zû#.{f?QL1[M"`~{\ko~CӮEMKS;|vMuR}jBۏ˹V )~>zOb9xpewʦ9ò䇩!IOgLI/ΫU4cOSmB0@Ӻe7n{wlvWݧ̞4J6g鍅2Xq(Cr#f6Zلl_v6&uoY߽"w+j];VWZ݀;NHW/TB=Xi}@h; r[l5 6D7Y$y{ Yp:I5:gDgo;tD2gK9jichaJ}Ge%tQ1P ,EhxZ!Y.4d-iCP*PDVeibJ$Sf'QhA8V F{3 %V+h&lxO>EO E-Ӱ K|%zpkOLć|-)!nc[$FΘdF!U !z30ȥӹSVw޾B344Ĵ_-r_g ଡ5s0VSyxBs\#.\'+>NFN:4U4 |Ds$4HsΎƨ=#.hڣ1j/ƄD kyf1ɬ-QSP^җFfB31s2Gc% LI%LQQyX|yLdP]5q+"_. 
GHOs!?v=YA 1}"C%T]*ӈa)IKbA"MdH6SEuD0AǹqsRy@r@2D=VT9&|#x<Ms ޳կ?v[8,\r30Ko:}V+ۋeNNOZJ\JU6VT Hd%$ǸGG.۝#3-3N}^0gD}`A0&Jb9lEc$Pq0m'Wx֓>{FXǠ6mLhz r+B;Q76٦|ʧq`O{.H>h0J7I&(K*!t΄LFKʜ!]sa".Mhy{/7oT[)+Sgր[4urmWW?{7~>X1$jz7tcQ^94Ip!e6YWkAK̸+q6Tn7(p[{2fk7ߪ<*y4& ޣR~8SvnP>hi|1+/t孟;}3X ˈïd&c|Ivܹ 4 ) gdIBd$YY=O;/^ΐ݋؈.Ԩ.i 4Gnyu4\v6u[[w9i(an.e٥ӯɵ7ί]4xQELzo~7۔:ϲf2{=*{5rcGw1:/E-ѥ%%0h?QN,UqBE`*UʋP^H !ඣ].NƩ`rj9|Ly$)TC )$e6Jh5`.RZ(IJ֡dqYqU6J(X;eqae2Қ8ﻐ,7l$npTe}쭊//l>@w4gD:1b- 7C+.ռÆՌ6=~Pҹ(+ :Veœ!2C ?wΖ-8=g8{wLLyf,=5%8kdw%,_A# 7,107͇W:\۷;asؐ F)xiYJ[i[v,%wH,8 Q 5"VȫF>hmœ0zڣaQ!>*W nGZ~,WGgOmZ 1AxGU:̈́_͆Fh_hW?0NovxJ)8ZkG,1@zu/q u(H/zYzuOG,|uLGJ{Iu$IE djj;[YK0' "G*W8XA'L/P+1;OF=dIMa$!ZkG|bFHm9i0(6$5(B=᠙hPO.e 빱TR$2AB^O51;SRXA=x`,ԚgθX[/JWQrO4\cQ1+%$K#INWWCzk F4@2cI9Ȝ:`id[u>u{H K ]rƒGJe&j(K:hGG\Jo|R:$\hTmR$v]9C""!(OڄҔ[#TɆ`cp [ss߾dzM8&xXV!n{Τ\é}EK+oY,2G#VifE8fIz> ojVJ϶Z9b'YmN:I'5zovUvQ9yqHw6Km*A|7yo>MEʷ/?)?L' -nA}[mdl}WqQ{=wm/WpNq4VB5"̰k9;d+}rm#`Vo|}܆dj݈;g& ,z4" '#&,u2MvFf3}/,͛ H(UF F{ VkMXd\s8$!QKHo7nYKFюpnρIJ1dZ*KpV X/ԁF.@Y]&0ebS0bG@2>-Gp8Hk T:RDjcp|Z,Nϵ~[=K?Gr{t ˫n\˗ [6}Sύ%LThh(^ҘR(#59 $Zi4EX~'RBrØkU 'z@ Ʊ`c)U1H*Y`%$183csZ1.lg˅a.=>)͙]geaH-˛Fʯ_(h4<R)9C*6Gޡ9%(wJAЂղh&jƞ pg6T^3AVDD09tӞMydVL(ئzG[X[+M=>%'2%BB-7idTRSh%C5y|ZiA-dTЊ<*,2% G(D ꨬm8VF8CFlˈaFT=#3(z@r9 &R,:rfiT BasL!!2$NM򠀥3.8II{(n^j8#~y\x.i^//yQϹҎz 5QbBI! 
#ڀN`D3)r Tf&y)x-|Xh  (ANlUnz#y?JFt'aM6R%a$0/)[U8jeDGE(QTwzBh1j8ԑq;Ḷ\2 IG d Tx-Rhhdqq,DK VH tSps^ ]^-4JyǛ|RnZZ5WV<œtPbz+`VELWW$+!ҫ%-0O_z$k ˁ1Zj[(#:X6ܫt˞Px{s^q$zel1RZ(cNڳ jEwx\@J$gm"B31",8(8Nj#/ȄQ8 Mrucp<]K~}nd ZW8[.owiYl8m4׿^}xR=Ut+x=c29$P@^H.99Ws-xF7' I$L1WK-`JUDgc>,'JLJR-o?gIe0A!3oW"y-a;]o^D ,7Qyz \PC#J/6]\qgR9>i~=/3_'\.[g_ ] i4<떛ۺ]Y(®6zVxwa[ dMNH"kspNak1fY>@ q<8 k0b'͢{wg_fx6 0v0_ E6Me-ȺETM" ڡ_έ?oϧmnOt5}8 +eyX< ?ֆ h uU6!QQk,IJJrTf2\<.ucMgmEx #њ F.OmydrwD?)lNP2rjXuA$A9GI>k9zWG\TC6,E]KlvĶMQYŵ ={ppdeCCGك_D Q.4epH0XYiIupH lDstJ R!xgl1^ҖxWEV)F6'tP)ME$TUM!8`C *A9EJj sqLi@ˎg_M#l nso'W1B}<_ʸsZF.(?EfB8}.Yi &9F1 f)y8B%iؑ~BEpn[,|  \xc#Oy}X'tgz.3KsI~ (2I<6pN FJ.("%((Bh\͢(.&cJ )LETn(ւ UkzNkPH孪\6 R?63H!80 ۘp$<G}vy56m/of=>_l}1}2-WLI-5w=ءbqST66 H)쥴ٛ0BYR SD;U 97Wu٥Xj/w@S%YFg9 [2ʹ],.>74Oi(֩31y!U(ty"izsN$_a2WyOF%<L %@c()]1zIk Ѷ>\5+\JcݏL m,?^^ogЗ'KF|햄5?Ttu15_Kh5J&]ْނPd`K;2e#I"BnC / .YK¬I ]x|F[ ('6K4H4f !q١H,ru׶\b5k CfeB+I,ϫmbt!2}1B[k聧M_]zvyZ䩳*K*Hu.ǚ |=A"(oJF4ޔ4buNG&^tq‘{}"p=C ]dH<< ?eig}.&Wy/6CF(j&t@.XlZB âxxYNE> DQEFsVh XqϥPQaUFc\O˕ȴN'Awmq/^WeϳɝmڜaOlzh=`3k쯝]tw{ՠug&M> .u{$:PI9ƪ[ .UT0 |ppY`p T#Z௎//gc6 Iib֊3 )Q uluJY` *I=qbٺ&uF{ (~3Lv!\ʕC6&ʢt"DZ$ 2)QEQO&ˇaTG%-~zml GƼ CƮ{~U%;gV&}5w+WkW+i´“%n]̧y3OLjzkXҀJ`_)9d e(@PQA>%$A{J?)qgzv`Y6SolC)g-G@2k Rf.g i.i'#ѓoQ8C$ftL~y wy<{w*a=%ݻ۟F^>]fѯ>_g;cv~lZ?]W)v=&=:~_‚fím Ž[vU\7v#FL_z}4[-H= wZջt"?( D4@P4v0A8y*GQT gT C)E*;w m麋6F*MڇD110Q(IzDurr ]Z43 GV~#Bb+ϽTqzU_q;+y|t2CzXWxa5CT᲏z0hP"\ R( cg7r 6g/v5wٞ F*~Y0;kL ¶YE)Q8l!v˘()gFFnN֨񜒭"20^ tq["GUK&4-竸<p_QK7Uo7K@!H6mKV(V2@ㄊEzG",EmP$;(48~j } ȼN)+BJ˒[mz.=.!.!.!H1Rw}Gwt}Gwt}GwU)]מ;;u}Gw Ӂ @(3 =@(3 =@(!f;ʂzE`ݫI\5굤S:9tJN'Ll)<]:4 f<"LFAua=eya'gd$j-!"`W ;g~RU_ʳt90`~7dYRQJ hfĠU$t I eO=go@ܖ.T:xkJ;|hƜ Rmg&׵-0vaә9o?u| h@jw%~cIO3fH"ŠTЄ(IIC*&SR8t9&cUQh` g֛̬ߍBcXXyK{o|}d:CwaGf  K$ ء Aykr\V%.9{k{5/|5o=mWc|V_]}@)lX ?C޳knnS Weebt88 ѡmֹ qsG!(c.f1\DR4!!AN ,F`*1ԥNNʮ(쏲=e7@8aMґ =*+%9Zx%%i Y_T18Fi$rN`l9rai;ܩzH+, O<$;8QObvqȍ|ڰݴُO@Qpf 0!zGX3ȇl9RQXF`=J:ֈTDMm_W)=9bD6xR)䄥@p9Yiu`R#c3s 4zccm M'Vs?Wd|IGN@/-tW0οN'7s`L)-sPM 
#12.C4l(R[,˔7C%{!)V5lj1V}Q3x693 vZ)ʜ泴E1G`[DmۣvGz|) Or$H%3C6Gie oa fȌ %0:;5)hUV-GQFTLIuv!49浻\X"b؈GD#bzI*ɨA*,ٻUO1kdv`%|G6"2!oـX'urJ3("012VtpjBֹs8"~=b\\g^ظEqŽYCѸ2&*8kL4jc13$R`&4)k-k$qqq,YK<􏋇'a ݯbMԏ\Jx0Y0&Z%ap,iwB2?@ n aJzum;HFEьnE7 ! 12cD)}^CU[]d]qR%.)ykm g"G)*e40iGsEjD: zm8B^Zo)Yd ;lg tol=Sʽ38WaFhNPƦtS K rS /*6՟Yp8xUnʍk(U)+a}%&+_^g$tbZDxUC\ +!b 23|29"2<M1qt$i:?3#xfd +RRg2-V9ǸFx$NQ΂s 9i)hqVg`ռ5,# 3-F H`c|0ӂ `L2Q"*::rbP+u;"vHitL̏NF08Qi0ZDhpH#Q,~=yWXM,FJDq8`Ee42"Vl@錄:kz⡼'g;t;ty6,(|`Ai5|a&/{?ztnڋ][gi9K9Zi~6B䌂}9  R=R4-8jӂ# Y0=%mULK." a#*,D+Pk}KpރLy)SDc."!&"8 Q ƑQc &{tu PԽiUz5F[qcMúP1̀|/j ڥs#x#$iОh 2 N'虲)0<ۈ84eo =ha1x,gx9Km<C@n9FL6>&\e& a-2ց"[灲&Jw.(ۡll'Z ppD!L|E2C)&U4$(Aq">"(~$Q@Ք=Q&R/5eDDL )OQ4K!u1qּ2ѷuK8m#a>wnJ!]tj}e;:|LwuUq%TS?0#aZRD08C6rʅGa`1L-Xhm 4wFm6*GT KIwPPks/Y@0 ?[zٙ|^RS)ֳ|"ϒMo>wͧ¤m].#j($əG KsNul#yJv(La=f96ik1XdBX0c4jL& *uN`)nJ4McͥWX,J7LXEǮ>|VV{W;G^|Շ/+/Ė plTYSiL,Z0˜o*7!(-1IA\5b\.qkZA1nc6(a6Dj))1-x<:N5 Qmѵ!htπA,|&#Wz3$ףV(2ңF"x eTF =$J*QP_׈5} "oT׊i^\@梮m5X۳*N?|-H,f4$zIɭfT/7U=ZV}utջ۹+pՋ=sժuDeû[%JqJ>27om\@ {WWEDcV>^K YW9Lr)xk.Lkko/T ȇviq]pEsJv0 {Heۙ#i^3 t"~t J?-M贺v(Qli5V޹IX$ӭj,ug2(S޲ƴFD *yanW4TeE;ALyI&v[cml+GugK)@ ْ#-!pK%,$zp؍)oJ -5D# Q/3_m8cs\QB*Mg"Drt[58pfRDbm4]cmn^c⬑f{ d|5(2cRq+C*;)~ƝzqW{maڥlw.ݶUw=ﶭBxit4R~} 9`_s`hn)1H-h[pԂGH-E S,VZŴ"rv<=V}  I\6t<P`.MIL0—i7Bq)K\#$iОh 2 N'虲)RXl-3qhNCM"E;,Tl"@`6I\7׵bkR43翢cʬ{M%U6)C zMH[!) FlIzz04ɤjygsBv^{ (]9:ζ y" "滈l?$rl~~\9bO^KHi$(fTbhP .L~ʩRؔTDۮ(v 0%X4yy NQAg $\8CcLlvSK͟D{_P=lT?5B_%cQsԛ|bӑ?-=# dd<T1H常D,V[G"-"(1kMJ2H JFR69\J~+kuǠ=od%*bI1Fsс4F: rsFKLy(Mߟf7֌|:ZW~ygztɘi#]B2m$IF0E"p#l/UM,2yxj3^yZVg. 
Z{&3."LR[7e5(y{ukсuzyilBK Б4APL6F !+sqpê?a8C""ՠH{'`Z-n7Վ1벌_)hÃmnV65}|P}z#C)6Y5@I(w=@#frOnp"O^3ru a)wlЎⷋ(W_/iq\>ح~|e˼yC^2]^^ف\eep);P؜77R} `u҃:Q;B:J_}ʓj_6"@q<8Nr^^%kZ2(G^benHGI| E+}1CQ)}T|xyryjDjm״Js/:30KfϬ&zh&vΧ͗d?|w5;CBϬR_ujڜuؼHqw#eo&\~sm4vsgNHN*"8=O 7]_xQ=e@ԤONPR91(61txDo.t9OxI2doկWws>sS"dayY r IRD*<)jAeR4m^T>t˼rB,6E_^08 q}՘_/Zq+څbW\x;u}l4$cC> `ؒ X]t&ECWU1} K[ $S=Qzv A:B.Uҹ;6@N&BS{G eӥ0+!!ׅ!s)TtWsy -|E]*/x ; 6uu vXl1_7:`FW FTOՓ~g6O+qف>e_B+r,0 $,m$J 4R$YR@r! IQ3+g`Uļ0 RpT2;,ymn։cܡMUwJ/i6p ǗzNQ[84 P ? =+sBs^Td\.2eۄuqI[J[bʪ3ZP5]PhsP '>R{[sY\1`~ n-=g5̃#A! hfFO)@NiJ&.WF)+=?wܖ.}.xgM<޿N bfg/nVؖx\ܶ M /;"vD%MKNIoNЋ/u'D jc&(NHA >'Ne6SdIdzͪoio bJlFBO/y%c 92×79JLgQgb]Tt ͫKy9gʭ#ϭ0˭#ʭȭ`dWУl)K EPۣ@Q(eQĚÚCRm"@3lQ 9S4RGJc("u!`Q:GfyL]&upl ;lo^_i}jE?BZ}uփIkn77x5҃q(WVծ W!_X Jxoֳ5+;8]'#s~{],=\sycGb^K bS"U8vzF;c7 ;t{ G?%xPY.s+qsJH=ez3auIdtRЙEb,n(I7FDٕe?I ôh2 gIj`Ii rҠ]ѦdD!&ޜ1-_39:/ǞE(+G)p:L0lp@2kk\-Ev PTr]:w%%% nwY4%~SSkHאFPEJ&b ޕ$ٿRЗ> YhwcyJ\S$MR7$ţ$UYeز*f0kܧRȸiL\O S1<1&k cH)(Fbdlx2:}ϞV`fEZqkB:̥CRpK: g<͢0&qZL6,_ZqKU9L~nivV܌]k]r٪rv@L+7eٷT/vQSmHqvi+PEM@"51jHe5ZjV'DÂS/|8;zw՛V7+*k%hsFVZV Z'iG#e$Uae8v W̒GlXny12 F0ۏߞy6{wsLٟ?{ Ο|6U9¯WCպyT-V:|zEM Z7Kmn汁OtZ>XHAf3? 
1*s&Q]Teo+_Pb?E/6mo{ouWgKT>1(NQh"\}6?.2K@V;GJQY?6џ:iB@&F@ 9b1+hf2lg=wdy!4C2iY0C 0@#-aĬLb"Z ^3JQ&T-P05Dmsb[u%߾[Q ZWZit ZvVoҥya^]ikRKLrJ͙"ǜ M1ٝ6_Ho7vRcwLyueWNz&M sb\G|So,ȣy2t8W'_޿9 ~Ν$h4F\,˔,X#nY蝯P8NK臙rj |w ,xNBgɪ-xײ|gVfllisQW–ΞRyW0Ӭ<ߙ^|OY M"^uX/lg0xu4~G{5RVB\BWVtJ(YGW]-?;]ǖj7T?/]V<]Rf߁xGW=& 6UZ]%-0A*$HWd s[CW .m ]%׮J;ztE5St1+ Tg1ܧBz~i̾s Qҧz,ЌpŖ޿ #zM>3XD\)Y¡4썙s.̱|:z{]]+nq_O)Kݔ})[4٫YC9Zr;AfA<; 򡃼p.FR<`s-AQhD,9j%Xr MF%Zr&?9\BW MsN`?Yu6;Y?gui?6˱O8 y=LApni+1xGI4 Gٞϊ{'}LA}]%3*cq Njʷn1hwYg^׶/~[-ɕm0S;L?OMߝxRy^\lt ќ+8WZ<Ĩ> 3>VǓYEi24*jDd셥F1v T#g+!BItAF f,]clxT΄`5Ş!RVڃb l-yA)ıH1WOp6pAh﹛]4*wC݄ӕv׊1!E90ed7a*"^Q|z(znl0SI`5qd@D a %IT a ,$A@1!S&aJ;f2Rʃp-A]^jF,++q_$9(c, s;鉎:㘗{*DA<3 H (P { ot8XX„ p V8,P>B6x@DM7Kc5Pro*&F4K 1Q1(+ V 1J\P҈ȐIˑcw2 gMmY{ߥw~!0B+UV npugr LWoI1 I2Itylkp1B s (23Cly4 ]#z`d`rj95YMN00]ӥo7uzRIb0r읲_ Z=W}x|RQq4,gYsKˍ!żb8w1o7]l4FABG5&Qy"1`MȜvQ#Vw x$ )[X1cP!﵌FMFSQ68["$Gs2ތ*- L`1.N>ZNVqr- \ؼ˛L*O`{(LN2@ѳUxiQ])nΞm r#IfG&967J&|9Ey; ]y߳{(|6[/vS+yÅϼy[0779gmG͍/MF/ҹ?ɯ%`Oc>3-Qnňpd&;A7d K&;G H=C q#0^+QJaΌ"8EXddY3^ǚcۣ;ٞR JH:z%88PΤF)I jia H8CvYI(J1,,VJ\͂Z5\| ji; 7 Iɭ[uѺ;ut߻jT\[qm݆vw@>ZYÒ,XsQΣaсJl ZvTa18{h` wm/*qW\LُLhSErqћ &km>O(( #yYELުIgPs.;>R:#(:N"ߑr 5ڣǨFPG $Dzzm!A`e98b#;2ZF;,3oESL1.5.hiJ#AT@bg?S:x.tpl+Yw ܬuIaN?WN *vus{?Ĵx*xZlhY)%ؖ?qdme9 JKm+edVT|k+[+v\麌]O^`<_E:b-K15gՊxS"1B=AHڪZj-->4Gv06LE{jE\AGcHjm %-JF,Z"DJQso|U[Y$.g aR'J3^%):I4FѨѢ"DEr '}qV ίk+mWփr,WQ 3R+CN/JyI#xui@HCKhD[72oki klhlfUXޠ"86 G0b^G@On zQmHcnuɦVFyIш  FcI_O~Nwf[weT *Q& g;wu?~}sՏ?߿ߜS_U8&{0 ?4߯~GӦTޢiဦ9/Ү,vS;=\[km? Ïɬ0"7ɛNl, ; d|0+CқtDO\ BnGBmB4b@S\uލ;<{?6D8隧2F)u(ɳSp@ vwϛ6ƾIgmEx #њ N\:]9nNP2rj8tA$A9GpI>k9Z# \ժ"$9 =&> W3ENZЇ>E&6(M"P8:d⨓-YKzG]wVԉZ-wS6:z|ɷS]Xy!)%|[ X[ڲΒi=RIjg {15sR=sKkg.nk5ߕJw |!$HPMWTG4A I8p*T^kfE4 h}/G,p4g?-z_훰pan_Rzj)V Jhs9{r0 IUqayEj:B/]-3ȲaX켖:9콿D3[ 5W],J*H* ڒ&oyyլv+$J.Ú_~\Z&7bK+S5Z}Nզibߌ!N ;l&N2x:8:cuоE+ΖyqZ\@>)l*7h2ˉZHpV&Myՙ\ǝ M OP6NaZG-,iB&EC1?"kA Ȋi+U:V;FY[)s-=4 ܳJB֎Čr3QI roCYC Z胐,rO8, Ԋ!JʘT&HhWgSG r<|1%P- M+Nإ&(WWc7_PXMp/_uc9bQXȶ/mLbbƀ0i? 
tD}<躦TݭVƟ߻ 6&6F=U%uifKcqIO\\@ )o䴳FFC4R9XV[\.it*Ř[T-ٯ[qhn[>{uנs1RO1+Wg pIyzE)zn>/_?IO࿞M2bb%I餦<m 6I"ސ8~"P-}o~ ^N tY&#.Np36E. u*PrCosw"26s9u=v?vA87`Sг¼ZSJ3iMԐ7qXɕq.ǶmYa<Oq 5diH:#WWGg'nkEQ?mІktwĿhK=mՇ>+-DM5Kh>&$ttuU)hǍ5[!íi]ܘ99}py#qN#^̞GQ~9vܑ1y8Z"g*ϏqrJWd|hE w |IѠsћ^^3?F>heW.~~4~\?r{OrJbwE )\wJWܩ]k 0mfgaA]@] },ݳex!đD4+ kzjyH=tлQ{ e=\55ą7 \U'C@͍,RJW PsfտMk51㪉5ئê{N_m Pb ߋ 6'9SLH%13$)VHRȤr1ѱSeNNB{i>ώ":ɫsΈ MuCi# @HJq$$s )5ThAE9jGڙn,y_󪷎^^U%Nwɋӡ]X/7Rzq;13%[%n~PJsbEwHuƔ9s+!B_?ob2/ADO-"51XKA <$bUIx )̓6tj=Z9ggD7k}I—=+#EUoz1HqmٯǥBsy\BUZ^ܹkEJ8=:k,(-%{@g)Ki_ɓ|sս6V<3\K\Ky~R _pWW}>WY`OJ+wR.b2;_\̋<VY/hTl6(ΊW 3]?ao<>3Q%/_ym1N[#f_NYRZZzƫ4Rʉ j?w&>FB칋pa5nra5R,5W7oh~ Gk{/MϦ G{k+,LAT\dnGWFG.֔]ٟPvSI)RXMJ.FL,돔ˤM=e,SvIģ (i.{"G#hM ;nJ\$D#e=b˵gێA_.@lҚ3J;*).HLNsLYnS#ZXFT H6L ,I/c'f\rcU^F51]yUy0wڇ"/ɡym؝ȃ79hhmA^Ke-!:P@-Z阘KXɝSP4t$݆)S{d+8BDgA4) !f+!HuU:4X:ByG/D=N2^Q%z|=MYtS7| Z'C 1FBNTcN$ :xm,c \q@ɈڲHiadEF \頄W"jfF[cEi6]6w%fA\{cvG vƧ Pe4P@\ sSVAj@%KIYS`IE52i&4wu?}|qqF.퓯3-y(.ڎqq4sm< R' DI R zŢC&4 8 $ /xؙv ڇ#@X Y9 ;F?>S#bG?VK .1D/w\+A"DT &RK(Zuw|xGGQ{tDQE#.jM@hArhq+hDIBѩs($䎌BtrFjVC TJ3.> '$(' l@7ҙ\^6,Չܨ>l3<9w?zebL`z!OsWj;ĉkYhމ\ K+%(`Dg߽>@Ѥ8IL3qf#QWeѕojWUCr߯ݴ9qV J <F>sk{Mc2UQkn"zAF 4dL,'`F 3&9&. ZSbȅƁw^#0i5>jKS$vL-O|ok9VJ\Zqh+-% 5ŋeE7wV[D0SbEEP@1(7Fh (I<M[h` ?07c9'\4ZK9QmI2`(PnAqgjsٽRlqmBq.M֭bV;QQV[fD}זh|sG/5{xfX¥(FqTgQ\O4m3 3*'K4H~AGɤd(N"UH0v?qbqljDzH@@gp) }H~ևbgϿ-x\(?0].?^?֮{ds٧="0)`$שy>d_ߜe̸w˥RZ˕xoo' 2_VRd ;=^h9ԍpק )m\kE Ԭsg57^yS "T 䃋W`6} AdeOBnn+YTw8# FQ+pпnz}*B @/~ ~|溍 v|լ oA,bee7.BD%oJ% F!f WoĿKk9Ҳ5qO"JtO7YbJ7mlNÔ J۩`<PmUc_'I0 x$1+#1ˀ$%OJ_6WJ9C%ISG)ёH8pNY3€?\VS}3{9 *4 r#y$Uȡ !F9&T'PcL+m#I̢#2 ^1ޞESb["e-odU((r6lY]GQqKDo{bH kESԞZ-wZzju,Vo%42՘ C{<):|} IP&s*FVƆT1bM)Tئl ufDnSWv?k;?zIrJQs)\ІHP;JE Θ %x}8(:/9 g;jaCՖ xʼ,Im:cAGσQ[^c̏\I1@[I+/3VAɔ+釋S1KgVKwN1nOsI܎\[j熒q碦 wRGA#3Y-nG[Ԃ1PlBFi5㌱}'mzumpfܾ( jrz=x\>-u4)d4mk`󻷃E ;HuEW܍+TOƗg&ag߼94JZs2mO߾ 9iTԎ]PoGT{65mK=emtx L|;ouQlc=k<ᵣ fVm~L? 
ZwZ7YoPFP1ć |j=%ĈaBn{Vo]vpT*[hy89)@Cvz 59 ),\y\@ dZzFv䉓>1_ռZҚ娶,s#;Bi|)M<]ul\gSHY22Q<2L,2R^oky!mj`G1&DfT(A|[Ldd]9eD{"޺jt4[Ĺԙ eu՜CeSUq|7(nD+$B8Q*O ,@k*yB4e'D ^#rFdRSSDă2YgH˦Xc9cizƏfY_jp$l#QrO-w_~ه?2m>TbvҴYBO OG?i:Taw!3;%+f Q't% Đ~7P*FFC4R9XV[\.it*Ř[f x_ƗjKܬR=gBTmPW`e6T鑢\w{t\g|gsпl%[G,NJ'5xoSIrDwjwU7Za- to~~>Nn)ټj Ӱ=Y7Oރ#٩tz8u^K\PF: ZTÁ`kEY. G G{ƅ1 Ap)'\Ȭt`,ӑ;Nx<`GD)tVn Y\4zq^@p|#G%"'U)bu7GExjx0D!p4a:YW S2 :ܐpNhc \Jxa<$z yi'[qY,XCRXb"s7EǙ]2zG5{@lD`v7Ƒ'(Y….8pB\.i-q. NDFq BT &I4Bt%^&3FSrAz:*ҵ봲K>80I0K89V~>㯘VZyTnH c!wrmQI۞vx ʵ8s$~+Ȭ3TLN@r{NjZݓ:_S6OV04(6_tK#Na=8]]d]ExrU''b6sz .h5kѶ@vuۭŧu1KcZ*`s.n{*B"w={vVx7\\1XyxŵMIOcuVJ6_4 ʏϮq-b:>p>0W:lʧ0~m|{åSHnT+K_լݰ {k^mDw3b?]kPc}!c(B}cl#~ȵ] <7=6(zTj+5ΛRO,4tb#' Vdb7<"5YhMu1ު6?MWL5Z Zkbu3ޱh@7M`edh&mchXh^cVoE":6aiM"[GOՊvG~-KLDb|S ;\-x։.S1TIT#j J:eV g?og/t! &c2IpFAdbdq5-_o +Yz'.8=½UV(wdϔBhN#+O\%5&[H'yC "hai) <1T <0.Y0^F ΍V#6JP %QkhtJ Rpc)[%q/BqwvLzp_G8l 3߯nߗ-^CLPOB,Y bs)|u<96ildITHUA1-I,v w^:crrm4c;9-Bȧ!E"596ShgDł @,%CfRr>6d<7w[8}I{J!<3&"S!(lGRK6&ނUuz1EXȷףCgfrO>jat)<GMM謾6JvW?/>٬/Ab>i3n>~ou#~3zpҾUmw"5%od*wbwK;_e6}h",n6 +$3nWAUL mڝ}6Ͱw =Iͺ :SR+CQ%FYAmN.1:7! "7[)$P]"(IolVC}$&zNG4H6F9|sU.TEg ﮥo 7 ǮRJyш{Ϸ]+RЄ(dK.* NJ´rPDrPgHL(ERHpՀ;m<8v(Cd(Խ\P`HJoi AIٟ9FP4p^fz*։r;FKL>pRlK;/,e]u^kcQI z7:4Sٲ^".VdX ɦ:#<@Ej댉q*dD.Ш5)j\:VC!1 rȃ68Wl]_(.+AŶ_"_??[,R"@"fGQ^xM* T`4WP%˸@-* {^sQg{ᨪ5ɡHB *V8@ڻ5EQV^\8scQLB8PL@QaO@xF|MFȀ9Q9P/ 'D{ {/a#x)^ZC7$'<|TFi։YJ}te3gI(wmLK\ŹNě.c9y+6f8|He3@qַ /e&#QHYu2Lk5f,T{-#chnD /[gˍ6m0_&ܐ] PuXԣ|gVW\p'a7e&<]f7=]/z8eڅ7tlk!׊V P#SUq bk$9ey;ņBgg+]ߛefoSmݾu{˛ ۢ7x8\S^tp;wG;iޖ/.zYL5ׯn{.>q;ϕp!;~?g:$:nH\I,𞹹e4r.s x18ir,F+k;uRv2> (;G hi!8] kafR3c**"e9AS}H nL~g;NO;5 $[A3DuJ q\ qI#FD &Ir/[&H ̀]L|0aI" T)Jky&Sx| 4o;c~6ƘZAd)ZDlFpX2RXC`6loO*xCVys5OaU+zi%ZjDJ)D40m2 %Zbw 0SdЖxV#նelM-ŵUZiƮPl yg j Ns&W 7n]\n/;z!7SWLp)!KpJP L:#Ig҄p \qo != J٤B:`^%`%4Kնb[lv8 `l[q(V[tVC4>F'ɽ1d2`nB2Spy0r`*0d@v4H5> 'H `c}9 AӲ=lM$uܬd"ZDٲEE,:'cX"2G&[,U ! 
# ʨ۵@&8XG!1)dR`Ir4aYvl?]j v=i5-.bgbe^&KBP.Th` Li$XY..=lM;LXOrs\y?>#kfޏ)Agnxy9$Hջ Op|\ڀNP]jBe!PozkL'3o43X93DnpT9:Xd4W:cuGCaRMn몹B48 MRFF_HoU Ugr`ĉ^ͧ UVƛ|=4| ]6H_OR2)QoU4l eq2Ϳl Z2;ASjp+^> LAU2)$ o@Is1irʿU9+Ϗ^'L*G@+G z+_ӅqB=`VJz8) oq)@$Kd ؄jřq-IɎ>%Nd>HHUa)<[*N_]&kVgNuߏ= f4+' SyNG巟AYUɯu]A=PӬNFj<(*sdʪ(W&7woG8@x&h<Xh\_=J}{_oウ.aˉ7!'ڌ^Ϭ'OMIS)j]o]~mvH$e-1jO}0>Х<;NRCP"U"Ն{L,c05%J+&9NxQ =^mI:[urk.c8hvàWU2Y**>9*FD'6tY2\uR[ʥMV,PX<@`%(.u.{aQcndSMmPkN}B=GRإ _1K1Άձ-;,*@'VF󛳗?z~_?7gO޼z7rIB-o@C[U/wZU5WWMۢjJl÷WzoO"ӧ$=7[[ D엫?_ ¥xhuH.W͹rf>W@l旕aܴRoJՍ}}{)"Z&vq%8c0JSa}\P𳦃o#2yU@-;Je*؎=Hmr>6E:0@))0bZ&SQI-p( 懋p%Ga!T+PE\%Uk:S,+jYy;yzu|g'np7V` qDoUtq* zu&^V`RGSJmΌ6>im"` dwg{1M|ucIfşh .tz WQ՟*CzQ6HF'g)Oc.*u @Xed0rʘ/œOW}_x0>tK 'X|(q-:ݛ }Wcv~04h@4I_Xvw_=sf֪qkW+ڥ i=x#  Ƙcd<@uy:E>ג)E 0;ee=y XuG?1հv AtYo4IV9E aN(Esfͭ$G!t@tq~-k?@H)eDs eW=`ꌏ =b ؃ YѠPd2 , a#*l h{ZAXHdcli%A7Dd%0l\ 5q6 ?tׄW 3XAAfƹ ې˗ d 3!NLNΥFDIg<2+@ZS %n2yKlښqj_|e_zg ,إM7m%Q(E7ֈ錘|0ce`sXxu@sAXs}ecI\&=v@Adҳ@"BI2CS MhHpQ0.D,}DUݏ,N Gq$Fǩ]66׶ڸ#;E:|H7&<$ pC|dD0h$%Mv 8X;d#\xQ7Nc-L-Xhm 4wFm6*GT KIwT! nM3x%x8ĦɳgOv~}Y,7ok%\+:fY=2K(yD{wGNw7q|D{wVIwG)a%SwHD{;=mdq/4. F&)< Ir4ϕ4Ty01QnkO^!o{RI=agmI7 ~?w{{M>fn/ںȒFd߯$ZOJbVf`'|TWw=X]ʎzZ|slb b,2HY!P1iLM& 9wVe;E/ߔ)Fpd| v"uS? V n^nmMvaClTY4>Z0˜߰U,<-1'T(]4I4BfpQ N1(myEG@L9GF!-vrq(k$lh}`N-iip7SM6Jn3\wi[0i9ɑka\# bQ#:1T#o+3\m.3nElU=4O⒀7y)]cLf6kCQQ;5όeO/uWj\5}bz:k%Vk05hC tXd8ycvCF LQE3lWyFB Cbֵ?01=ݏ <$PQˇAt:r왫qI$p \c-\zLl փΐeXiKd1ӝyj NXbqv_1DԆ 9dlK+I\$QZ1 R?% >{y7enTTBX!J*7Z5)yyUy?zc1nGk5X$|[o=Hȩ? 
^ 6_cW[y^hw_W`+T; .FV:cV $9$ /@fFyz#xA)hյp-ۀ%imO˘\\ :6%C[g3ޡecvͱ厌Y*VDNy,ul<>Wx|q=z|l2M;qۥ]U/iwcn1L8P6Pv+RzbRܕv nU@HtU$Hc)9g%x@sZk41jّ\C"7=>K\@,:Ql(e- er [mGzNgUv\}4<1"5:[_ތRLz7쯔Co3YFjyFe4~pgN79<*<1JnOm8oeʧ`'%N 6QDͣ9yHy%Vh"8} |+uwo|h {(nb,Bc&ve2482uo)1VI\^#J/rˮӳoyw\³U\ֳ("=.y6G\7Zrec.zDsT A`,KQ*SI`KK[mkIJLe"UiVTΘQ!mȘ (sLR'vZw'Y8NqyόQ1-zy轕WO`Ϸ5ڻ9^0ƄU.K֚ʅ]՚ʅ-E+F,T.| T* nbu%UĒዺzV+&a5EcrCdt(zk2$U8'a q\O_'Ѭxjwx#{^z]})HM˜ǪR5g_;b)EM@5ab'CHM~v0, _owS9*9Ad*a{Iub b`:|\J4< We$f.C\/?![gtXk%$vq9|n:Kì!PX/ nM{RS?fJ-]ax~\/XSg(9u~)碫_ >ʹ!Vy5oӿ]aNQI?u>A%v5_W PK"};*:Rj+Eq/v@NP1\Eڊ5%h2-sUa5oưFR[|%8/RKԦ_DxjMnq *b+|xJ4k1'QW]ۢ"`I%Օ$B1"uQWFUĒzJMiUDRW]ڢ[乫%t}JStRmaړ Gl[WK}q5I;r>~cW+ӪðOj{oyJE]KzZ=J~ &r`v1#.IRjSfHY+{;˜Lna~JRW6&u͡[fxx*ɊIyZ*MDqśyIŕ,&ܪ>jW#|Ol>@*7m>| |_͛Go8^+eǫB#‚&U-Ј ,1җP d\o 8f߻Tp464)F~@X3((dQ"onϗ?U:pn|-b0Cp*BYR lUVT3RuT:aD=*z 9'y+.wΤfY`̲9S˭o2 x%WCp4i!▱1kI\Wh;CP?VSjz#ޞ1A߮T.uB WCz]> akT_/ܹ: w.g"к;5B=;pOWw!vy߫ ϯ[C|ZaX*3}̧A{EL}΅%>Z @nQVxt(i'/${ IdQr4GR5YrXx+!:[\ꌴSRWIF͉G8䴕b'ҀXo_$}EGYXG°DEɈlOfDWӶD'[عG'#Vl6IF%Ϯ&* ={v$[NDyvOLїŧig [~(\|t13Ehbw0/uyQM'̑o&T%gcìĐס<:~\=8JLt!M=TƨB#\G,YDcI =|[Vc̸D~5lĬGU;8@8)x;WI)+F%A&!W'H $b.a1HJ㱦R( \D[%ѹD"J 0,ϥ7dR|4j}|P)߯_~y\} <6jaٞm1[jK/ 48ǔ& P{UDF]El:wu袮^R%ܗ@? c3F9q͛tbI\ԇL3fTXT`#wR;yx\?M>YZOݟGqݻ?ӫ^)W5! +P&}P"r 5lȈ"p%Jg092$LPQz#T6]I3zjqF, (@\B-:b1F|<+)E9 Ք%s_ߍ})MY&p,v\/"~X&D@~1̘X _"jhw1SIC)k2s? 
LI{!$Z'-Y-11z ěoi(~Cx6 Jcq >+NcT'QL,R#b iM`Ql3F{H4LTɠm9MQ;iu ݭ Mq&>jee'*tdc#GifA[9{bИǽ+-AK-t-8IM,s dkF*#Uo,b1ޢ& hQQe5<2B X X j)# =)EWEq."`KB8%O5Zk;()O s3D]_CjKǏp,.$y9%z$?Q[kZڏ~ &z60!fSТ;i hxޱ/P ,z4" ՞ #H߹u usېqqAN 3E4;dfb 4)XTTuIF5C"SňHr.P:g(@,J4s'c拉s̝.{H f9,:={#1I)&LKe)nR 1X/ԁF.@Y]&2I@ǤjDoa1Q"k` W)D<b\p1Rs"8 87j|;o",ENl8;|1=y{ϕd|N.4Xw:dw+5+a z0{7o̹GA+4&j36H WJMRFS:R̥V)ɩRVD(q,tJ`U JniX TiXL[*|a1¾Pp-Q~yF=$n/'oOd249],pJ!Ę#Tcr0 ; cЂbY i0`Fs6T^3A!VDD0(r0X[Ĺc4lAc(^[^{D;''2%B1B}PU 42z*)璡 ?LFﴠ2"C E{,2%GQu!@<QY[f+RǾQjGܥL('x΂ґ3KI5U2e="|'pjT,e?0THVQyKs"j~qQw5bZ_4ww1PQ/TRA#&JL()Rr!uI,FѰ@%i_<_LR1hGpak믵X,fq5Al9v:iM-{~DE58Y\x0TYZɇN囥T0RTj Ett!\[m,(- #yYGL ުIgPsG>9g)={GJ?nQZazBRvY#wzvm!A`e9s8b#,Fd T!f.PGL1.5.6҂F0t_J/pM凳U@D欯)]M4S/?Nzp|zr[.й[0SbC":S#rc4Z$%`DޙӗT6spj-yBPPD653'0BR\"hn_ݕtFvg֭V;!˚ZmUD]5w4YYtp($ۿo82u6{*B ca+l'X)@3>UzFꖑv =PS(L^$IGɤd(N*Pv p4+i >OmҀ;0]%Z0&qq9BZNӬ"SNknl?ޅwazBoϿ45̯T6nE7|-ypḨQ"hϤ84@'ypANpt3ihQ$9;+@y.KhM@IɥF-ɍ`AᤲINOpѬKbu~Ǧ/ʐwSq*AҬJ-[JTpzjC_75@2MK8+0|rR2uQ7GT?W.5Zg7QDX 1^[nkb 6rVY_bqiB2֑\?y0ukE050bLq%`q$ =]98'G%2y$Fm+QhXOs%lƓW.|Hh1-7tʨЩ=l0/apq/W|-~~巯O(3'/ ,8H`$|?=1?_tmWCx[ yn|qUSn&0dJZ1OoӺ,< z[f–1"οdai.*N$d1xF N*rPW \IAhqԂhoy_/:'pZZk}kWk+n7i=ҲZ2H1IBQ&QKξť$0IQc2d u|QVaUnU?qԔŏgZR= b<?:\pMK7uIR Gɉ`UɌpėyV 0v^gY!OBfV* Ib;K6:(K<.me@#]% @J6 e`$PO-P 4{sL+Mxf?댜۟]B_@)mx9y1=x{r }rGLŸwnCBB<IPYÚl`E)T^ Q:pz@m'FKifNzu@:S)|`Dok+&I>1,p|PcդdfoM)zATDY&1Z H]hʝ^iT3r:v(R̈́Z Dc2s䉗D6Ifz+lWn]##]q:)?_N'IPT~rkttB}1k]gu)%$!}J1^rB:EyЦx `ƏhD~\D13ɿ{롺 P0q™91?U  헋jtP()F Ƃlu8@Ӕ,@PrveYm'.WJഀ&]T9܂=nu-ْp!캻XK"76iyP5/dnh]Bjغ[os4E[ լ:%ܾM{e;o lJa2oo~us ryۅ,z[:nhؚsm…[s>ymÒw mʫmd3'x=)P⺷pP]g>o${~{8WբqR)AĤdz(KsK߻2SbLltI&T:z8cYbc)dDN-q"=}x1|h9j%4ͮ[0>]6+81p4 3ZQ?N5t1ܕQؑ-97s6  pgM gɥĒnmRΪd)U:$+3f&0Z3&*:ȹ/Q}mi2מ-}ݚ--:EJfD%7gTX VGr|.ՑZi:RL_X+,.*/W @TG ?{62+Ja,DK(s:}6{9q?ϲ"Z|6JSs8bR^ r8\.+9IPB!|աO6c8-n5M:bnݠ2 Vl95}6C̆+ry1KIFdB@k.*&m3>A$j{  FHsw8'1$h<6)ɕwpi\N"sb:ei{0K-c3%\S)β 쌜-j迀v6v5UZXN5lC|FJLF?w{d#%Mh3فNE*N$ , WJO7g<^^|i,Qu k^^a4iev{OɃp݅!kٴz8|b>zD{kjL:'wqt.Rr>tG[~tyqzxzYWRo~ . 
a4_׳4IGo?"T]'XxQ+Z=4,RhKK*6(yHx3 G7d^ѫ Z.y7G>||1c*J0Z' 2&z>;w-Ut>|K`T ƿgMw"J_x*vCH33Ճ3, Tnz-6GV/9`*LI??s_$y-m4qӳLJ|q}1>0M{ܴQIW]woɆqu_x 1: j>0IMzyN*IA6=3IUeyw;ݚ_V HDޢEYəmjI$CLH%13$+V\JTNDLt,Rc2Ӊ}gD$qZ._621JBr&@B^MRCVX̡z=܁!{USAP/_0½U8y5E 3?ܴ' (vAݠ:wrv;&.g"-m nYp^GH߃F ? v=콫{wS-QUVYA40/*\\{Α$5< ``|ԥD# '763sTLLZ ǜ-c9[-h [1#%PR=JmyĚ^?Ćxi6+j9[Qfpa<ŤL`2a[!?*:mg "14X C`]ps: !Jy`.+tô4sWp94 =0$Ka-"mqZ³ 3AgPAsL=<־OZ3~[(91LɜzTF؝0׀mH$s΂zHpÿNSj$3egd'3%74S.yq wMgq3a sJ3ӽazxU6Rr9`zsD.3#K JHadҜ> .v2%>z{Ϭw1y˼ #aB?{Ƒ[GGwprfKA00E*$%?W5|E[CR9IHtg{m ĭ4 {b;Mɲ*&&W&{hGKhd@o;5Qj9A٨ٛ${}_h\OLc54XP# S7E_4;2r쬣YG7=bݪ{דO=GR eTKQ3aT쌣8:q,id^8ogEβ$bRh@7'm?sҐ ~F.|g{'|6T0-zsjM2TiN` Q-zdV'.λλ޾fQi0 ss~0sˊcv́$S4dGGU *y8T +* c;GgGgx kUO/Vpk{!}j%{F9T!hm2'p tE3r*9kd7jMSB$}wyozӨ7IDbO{gCهTen4~?0L?o%;$T4F[YdBm5o~'= d؎^?Ot C܁U^)0o*߰mݶ{H;vV<|Q* 5@(NF{e*ӦLU%鬼YPvk(t۾6=F[iM>LUbJU':;KŭuɈ袄Z LueU tan^S>a_`) ;_Vk&31hPƐTVsl*M7goVjc꾃|D)ugzL_[:@+0а]O܄%@_)'uTg ^@'?SB!O~-+?_U(b<`գ i+djv+>W/M-URݨZ]tؗ[rV.?kg?w O-6PGqKHFʚ8y\!˳eXax{U~u`7xU.\`YQ˽y4ϙ]MF9|xދ^/IͻW!(#$W^N{{W5ry[3=uգkM :nsU"Fv:B\W$lpr`Ԃi;HW,#\`5XW֩ UJW" `!H.dc]Zێ+RYWULIqL=t\'Fjlv  :\m[\)DFBd6" r5"Vv:B\ VJԳtUT+R+yqE*w:F\&͛jmC/ֻ=qєEƷ~vE*^Wրc&#\o&\peb"uurH2 gz,U*\ZP;uJ=)m?8 =Tf*UpJuڶ깕LɌpEW(&e+R+"kS:\Jiz,iz}:kX6ΎmPl˜iW.'[{oZXe V2Wxc~}oQX˂w%Xӏ@/5+HmAR1&d+QlFIm t+קD~L>t쪙'yW>v*MjRYWtm3%W$\6"*\C}/LWՂP{#Zc]\jv\Jc:\# ں> /qWX`r%7~rKa; Qa6LZ!ڎiR);1 \`l*IƪDTWLj+Aɜ`|u P歷HWق :\Zڎ+R:\# w@F2|FDI37SkZ+Ri!cĕRpH$&j)F*ZWǃ+'R.#\FQf7#"RvnpeX '24~frՁG5U3m4pmsɵQ lpErWBv:B\  _CghֶWr-Qy#•!I34_j|Z mX[IQFUbnDiLhJaڦY vmk4jָ\~kLfVZ5i\f<Û$7 V"6i1֙ Urf:\!4hrCd+ϴR]qE*M7qeY2 v 'MHmNY6JA*+н%"^%|p uҖKI(g>+|;bJ{`9(^9FZ۸MՉeϰA,6M,AÖޡܳ>j4B;f}+皯ëz fZ]箽GJ >yF骔rw.4kuZq 伪Ɛʄ N+uVA:1P22F]yP{ [˃3ei h\1bu1PTFk &T﹃MNRpAi Ҫ"R teqɖ&RiK=(`TULѱ Ȯ]NAqaN΀_)N==lJ W7נ>Goa5IXE6iֻ.,a]ѬW 9eg> BknK`N|'&ٓVrh1Kia-XBY Ϊ= 'ת WZ[ӥ >i5(*f0]`ٽ 5$<4׬F>N ;'[>&rD& zOJ@=P1睮'V02^}f|)<0:%OsQ3e *SJo}vH `c=N<nF+z:NZM?Д+C)g]8=[|UGSW\ }fX}5!hV^ < $@~7yq_4;r윹8sVu:U)gn-'{0R@"_ 4G,4goɩGqtXY0Ig1ɼpx (])ev?{Hrܿ Nh=NIm.Ҙ>Oifw] ?.P$P 
"~2+J'&sR!9ZWI\:{"+J ݴFl9d o bY9[mo<}#-8eߜpF{J"7_5>`gimFC}1#~/>[|VvmU.;/Ogs `$ x R;R^cWAe WF[.J,e̽F8F(b^1(K.ΤIxyIdLWwhiyR>DJ#0G;q%l<3Blb~4ZA#vO蠈>mVxxvI Ȟ0zJ&:ұA#jp.k&@؄LgW= Yt<G2xt[G%/vƥ,Lt$ sjReȎH[a-:dp(`hQE-!'R %^x5A3iُ, ߫ԍ{luy?s77 -݉Nv:৲ ۗ^,ŶuSǂ1u,Mɻ/ S /8NZ(A6:8oL8L\∄_dŴGǕ~OAG??DrUNY7\iQ&{RomgR/~ R"jqq2Nv1l2JfYJ場k>ďC'Q($xx_v:w1e7\.8y߿HC3_?5ed4Ů+ c0wƲ0ǖ;Phk̆;/ 'IakEvQI#HՊORw}zѹu K,]AfYC&AdEL]| IlÉmSU 9]bFȱYW ?ýuynmg]A)/h!X`Ki0Z5* F:%Bm:'&qkPkHE=VwJ.XW%|DŻtw5aA R5X`&ʨwe2v$ը,PXòV5 ޟA1 0Yɽv*WY@0 2d *4ޤG.Rk.>HSdYk,=]VdR:8rY'LG.B_Ir#iS!^"-=|鸄>3~~?:lO(\hoR΀>̒ꆟg9fpV8JYaրuېl̒'澴0^f>%ߙvG@at,gYMp>o!rcJ4ȀkAvZNjМec,WƍsB0НA,!u5[}%߂%&1}r;UPL\cE&6*&R`@e"d,җ 3$\ԔDpAI&&%@:#T#1!H/)#)I24T\;v:(\2:JYXꍊ|̥I!IPz2 #906!@&–HLD֣{踨|\̾qɘeLrdK[\s,ђ5qE(֛El({;"P"nY]~Fh˯ m7gP|&wr#vpacZa*R}Hkt'PQeH)xMCwnJ1Px. +K-L0qϳdWAh4r؇ -`W_X8|G緱nml{LHKkgtHkeimQC3^rդ"FIgj:9A]K^Jx1}(U{a >q 5ᨎTve[?b^mK6j4g'3M9igؠGѰL?1j# AvУ}gs c,d/^"(Yc ̂M!j# L*Va#P%Gz=;oi獯ʬ}_3VL{zlM>}P?Sto kT<;Q5(DݠP>&iw :0@YȳDq%"tcIDx -`v2z組6so9 E* 0Ͳ$4a!_Mɧdt"*.1J%.22G3ASFcQ9/uP%Q'&wDD,PJ|YfsVDoX B隌ᚊ[-G+rU}8B-oJxHB.Ιet]oZyv歎{vs]{k5zgɎ֟z~ϓB[hZ_~uīWU\lw_Cݜꮍc뻞 |:}ь'{=mi|AX^T /Js j1>.ÏZ<\YsP|p^HAp\e FCYhIDyy^ uGԍyfʧeT2#/N1<pO 3^Svvzqq;zCx#v2&[6MDM6e &92 &xcQ$񸤬f,`{!EMM5D#0JY0ȉ`c>Д8#⊉h j;{on&z0*FỶ,4Љ@ di)p&fȞȓSCQ<̊@21CdE aML,h #a}|*@Fu250F |ƶ bcc[D4 #CfБ NkLAvX_$ɀl&cH!cI'cֆh@3>`I͍L6 썉0"?O4Ԟpq<lY-.bøh;\pXC9-q,JT&ё%HfR2HLV$pqBz\l J,AZnQ20bL0XEXRu&isip_*4HӴRu9Ѱnt z@ȆAZ|7 /rwxBiPlc˴?8li#EsٽY`ԉjZB,T%n@KXxJ$s'jDH*69.UȂk,D%iɕ(55̸D'p+o6潧vo?~A~^ 4sɄ7KE?N] ˆ׽AqmOY뤐)Q"$}g›rSQ g0v5̯>L7-@)]xlKH8斌┡+v|f\c7Kx<^>Vk[͒Y9s]* tꥸ#uE8}2`R\^CRsAz_{Nޤ+Wɬg=Fgϖ-*jPI|6^9]MS(h(.T=sh^?k7տW~/x=9}bF3@}o%.vABSm<8wE90M"k7]{XDX1ƨRRdVc8wx΋ˋql[F{뺶(u\i#Ŋ?P}=&ʽ >Ǽ7zo=TpsC5͉L/9Mǿӟ_/`zL3zdzk$4=";V) TU?.4||-v V ҫO[fSSꭅ^ *CoCޜW;C_Hz 3Bs%10l Fv#R(JS *X+J؜i 71[,QV9A)OT%# *y974ʪnw卲̇sd#;p\ Z(vBε $fi\ \q \im;\); %cWE`F \ii;\)gJ3!!3]q ;"i~H)tW \WE`WE\kLm"em•b%޹'wJ8PWL zK ?)327jC0J:3 Uȥ B,qo;w[*%[0n zutjROȍϋ>Å73$h9S^>?4ۏ@\Pm/߿:F/SFC *y+j?Kڡy]EwUGoK:=% N]h(m8.P9t-yUJg6$]IZ6\uiפ 
H|WBS)<e;ZAK,n2LR'@ ]z҆/дڞCHZ%9u( :z+dI T(+uF=4Eyky@M,c<E2m6`"-aiL/U.qKE82Gn0YgL`2;mѠ@codP9L 蘥FS}((M3u@iڝ]L2+O؋*, q%K<Ĝ38DZX`IdGd^E&xUfMѤ- mK5$[ %}.ӤRsA&%%X˵&mv"`271fADUU%$2 tp 2~:0[Xa1q6y*2?|ly$'Ww3!n7nÞ繓rד=+1IU0Yg$b& L[8 Z6泒JGYM@'uB5.Ip6 :>qm6'ǍB-)U5ɳ= .nnq`kx'Z+ey]YOcY"́ʥFy8Ԋp0瓋Z|rrlcw>9\6M96I Ltn=T^n`,EOL\j$ión'~y>~9>>h͇Qq'zYyZ{m9޵q$2w!ٻ10QmL ~W=3ǐՔhyDf=5=Uտ xكF$7!|ͦtL>=V1bȏոƁO`;~4al ÞLJsRsJ0!?QoD.Gkr7h#i;6àA1!}.2.90g*[P+ @TCDY4:" KY6~#E5n8b4]`ZEx.01r_Ѓ4LrhDS PщDX$JI؄ qP=gk u# <0E Y3#L.rOyh ]g|nR/Ys|R=k5x8 I4'+ hQֲAI!;"o2!]D F~780,J(,ݠSz> al1}ݷeu-Z?3w9.݉喟M L?/MM?ڸ<׿0WSW)(V/_VD˵׆^ mimR&ŋ$6Y'Y+}?=Q%=goIɟ&;ySƌi)h5Usu*Gk!,r(/X]VDfOǰTimSm"!m.WzRk;wyyt[3A|q3L,P7j7{vLc턷1+y&fLcy˅V^w杲 n&zp, &ؾܦ0Ae@SH%1ɣ I)G5h*ͥIbc "HG j^0hQ]߀9Xir<6b D@BBYIBPһKbWy~u^qc#Wb(5]EjEoaBST |q?Ent~gdq gOwntn.]g/d}ڿ'=}M ^Sy=deކq䚥7Pc*;zu˶٥6{nӰBDg"R$b-Z1.EИnhLw!I%)xfD8gM>[yC1bj&uWҸW/a^Ga[x- uvEWCJƻ??u, с3sDg ke%NЉsD1Lp';r%24˶A%F'( D#^rrm4DJR@+i%Y4۠"g7^\n͏_oǍAQ; XIN3ww:~LOUKIT1:-QH"(g>hHMXHj_**+Dɛ?EȅƧ&76 }h}ԖhF,,+ȹeu2HH pT :2.\H>׉eQ8pXCRl78Åʊcq S1j H/x 9$Tr{Qвk6/vG3WYӊ VFM?\^T|=V9eiWΉ_̊yj>G3fS Aq7M34z݀7h} v'E~EkAFm_4d"c!?ȶP.%W*p6u疱Q=-ekvnnX}J"۹ệO͛ﶃkB?5Z+S[L?diظ3%oBer ]ҝ=1E˯]<ޠMFHmm-NW3]oZy济;vsm{m%zffVgwY< v$jꎉۍnJn5kn5Y4ͩn8܄OP^u!" /ŀ{\l7D뉜g/p.2"R>*܌4XDoѵGފNhd jњ@3P)O 7\%. 
"QGHpwwm7{T15!<Գ_vOf 6iT*;V U&RRhO)SDh[颂 -,#VyP $͟"8hQH4#2z 9͚0w:~k\mXmvwWVT+.#sM^{-`Z{@\kcb.)c%wNYC(IʾCTF$7MRh_T1T=L0iEѤ$fPI֌ȹEbX.,BUXAT^zXfdDZ=yK<,on<~'Ad0!ƨUhSjPrH=du^H( ,E-s-r3H5c)7mš|YO0'_?AzUM=1^}&A |h!9E$_Z\0B$ؚtx T3fUt X1-湹5^ xq?Vб% 8\X Z%ޡL#r~sMA>˩<: 3F~,;*\j` >79Z O ,Vy@ST  >^/I_gAwfg6@۩0Ϥ " s/kir !;W9 HBRI'AbݖD*'}AUz/Z6m3OmPm23a"BhX@`#l"Tc ʙA*R"X`h@zC^J1 e moP=ѲضIk3,iW9i*oO6ph]ѦR~DY(2吘Εd)%-єE qF+YsS+N_TapUP[lH%sS* rpђ@h8x*Ugb.Aܷdž> 4{Sw< zcqLdíjEn,3z˵L/?L[Qб}= ?ִBrzQ'BhVX葰--ڂS"5>U#BǗW3jN;ԥ Y@F'PJH%Wt֤3&+q7|yݽy/ϻ L x߻V|HsS~%4N$Iz3ۏwc,҅PM^55̯46Lo_w-%uzBB$ l56%Y9s]* `ZO.G$7^ӄYc~8+\B>bv~ђ=ύc]FffY{$zz2Q+ E9 ꓰ]uCS(\ m>]^q^9gK(pcݛ^k1S{zWoV]b4i ?̃J Icy~sw؆Hl .6u#6vc-lfYBi8jdT?K?]/_'Ѳ 7i$Wza=HXs4髚WK)?~GQ-j_˽п<'v/?=?~Ͼxo=:P`o"A@еmko޵DءiUF=]myCw([$W_Pa=ͩG9lYSTA0"ŴYY UMbl9"6" CZ*A+[]~$7}z2yu U0C `ǨXF "+Lӓ0yUza?~ydS&N9zc{0<`Fs)渏h}B]uZ9שP[㉽↋{BqËtwn 8UγprIq"="\<"DЋ*WJwE_Nq!8a#W੉(U!cWD-gUN\DqeK]Ra3Vf׽7+hA 3XdXߘO_7MFMi8d# II<DcY>BG#\dXtġBNL@1 BkH\$F\rѠJ ~P);%%+%~DƏF\r9qUUd'^R uL&F\rшBcsUҰN\@qeŎH\**J8tqUE+`8s&CwuaK$J9nT|W\FXcx\H6*k#*@*^VJb m>? 9'ԁhFf]~O ņi߽zNʠIRA4@J\ĉGM,# @V/%D|(+!M=Nwnfw9^{uwR?Fw.ap(U0c|n5o6z&. 
w.'~-/x.g=dF؈[qj[zUY^EWѻ3KLYg=(?vY{B{jgB2Y0R 3cp)yI(+yTb6#Ъw|`Ffuz%{.a150̖$^;!qEBBtfd&v ni.3\i(5hk.M>O$ lUⰞ-+sPbc'Oc߂U,C _4Pe'HdJMT`5ဌpFI}ℙ#U?ɝ8JZD2`IitFR%rAZG-+M&@"b—l^m̢eHk܏,IBW7$!uA?y//Fu{'$MAgRNWUӝ/wUu:ietUu:tUuUtUu:]UNWUUtUu:]UNWU;tUu:@YGC9!Ux,9 9Jd]ʻC jpľ0wS5H&՗hE+n^URJ_ʺ+uT .ece "}bUWf]uﷵxf:owrww`g`\V,'eE"rĸYd@_[5(cQG<79~fg7T".:h$zrY诫AlzQ< {$cj^4ɍgo%^:8}nP 묠nf慨rwOnn6 ]߹n`J4 LYj Z-m.fsll;C-WK <ҚYҠ$&AnthIg_ $5 (JVSc[cNjZH:Xu/4eZσ#QR=2Z!</' h^ҸP?9VW%X;/*_`;5W\>=(T#k?f = YXIQEF2UrfФI&8.H+GnKC>;8jv⯳9IU|>0)8|;5t/o Z v癠e۷&]\ W;bsy tx*/*fvRs):tpXYO5dOZG:)/NI ܸl2wȊa=DhX",|ФR* d۔.} ԕbD̑LT:6^`kJ<;$Q%:iDF,!yKB q{(=ؕo6̮Le5c°0~{{@Le6%S7n2WUdi`OӲ95q)^]~θ6fc*tT+XJUCZ&dBKU:DqlM_73~TٴD.F`SuM`.SBoRcZB{5=SqR_p¹LJ>"kԯ~VE /q 9 =ge ٭Ja'ꄐ ΋̜ N#(w5dj'mF-G|*Q[,o5N׍{Cϰ_&[72x}7oַW_דtQOYr}Kc+ήLV-StEti#rƶYᦻ+޽ya}O{gijƭ7mzﴼg.;rv~F[ڛ_.zOӍ>5֨D7.Xs>FQon' M|=g&mYڞf0%|AAL#"x"< a6#0$Iqe;ÂNhi *rn$iCb6#t&mh4XpAY'8#}.f`DM 5V@!LAd_19xHi!D9޵q$𧻃'~0].SBRrE%RC"9S]UzqҞ,M~S(PO*zBts=IjpJ M4kMQj+e-htee52:ǘgF:L$zR0Hz]>:E8^V&%U[3V#gfUj.TuuAui wI_|zz3 t4n8Og\c'ahBQGȦ w{wPt{ѲꬵfOm0rD9z+ iK5$5T*}*Wxo^RgB9FH3?X\~XRe\Z;(˒D< up\R m BR0i&QVjLBdAy1bXGcJy`dR5Ƞ#mkhT#Үp3^XNJXsRJmqZIeZ{ A !%V1xa"BvhI|i۳N~yF93@C2pB/hJ( 1ܻB4,@ko"dk1^QŹQ3~qF=8涹gzܱshuu)/9-N=eu!;l|M7so[͏nLSqNOOnDe8|RqjF.8?Ө8,3ܧQK-2ӌ䢥L1 Sp>RvZ!៭oEgAO}rNk$^HtYf ֥bt1\A&4i)d%LՍN:Xɩz4Hڹ]Nί;"~ew{Nu3qW%tr8O6\4 NIl $ߥy 㬲,AlMr=V%˹qIVg>qM&'ǵB*V}5rvKZ tϛ|oXsRlWeM˒?в0? 
Q[G|J7 qe.JK2'OnJ ck tu~oqut&쮆^ mmi.^ayYk>681ލ"h̍.{O3m4YwyA%6'|v!K{x~ 6 Ɨt [DjWQ/-7[NiC«ỂaBWbo-zSHwM^,{\;t[E 3%Xwv`:Ss9$w E٬~w6ýf=&)OH땰aQ64|hL|vw,t&fޢCwA ӽ{`{GN]ʒa_=5zj3mx-!~G Z2=N"˪4%J> ≙D*c=Rq>=!"*U8(n MI@T603s d=z`;$熣VYJfW80M=NxcNMO/qS.K;3V_B 9z )ѭwtݞ|sA9!6e /cؘY÷O$9]ۗWw cwntyF/7qf:OweFҙay7J)z~ܥ <ـ`vpWl/˪]}vǵCے 5bm\])5FHȖ'4G \ .>1uM rxC a?i97ϑuٱU |g,Ȃ7MN&8 cD JU(l⥜ P:K9EHRa$y$heI|Zhs6:C](-`&0nUąH1ԈJM>ЅhAikR{Ғ;hJ@j#Hhٻ6,WcmڮC@$@v_f i Q('{.)QeQjKl:M,Q[]O͹x iUE8USI{k%xN4dlc'B eNJ}ύ:־U2)pBU>) X )$Z4XLզlR\=D@QXOI=i^,>Y0F.,N5)VnJ}Q$\` `ґBv=5ڐ]jy;aJ$|S(_ qр)k|FnxJLv **6(:٠-;u9Vs+`aJJԤp<ĪuUJ|XZ4µ5ׄBGWX,ĕd Hqah VCFzzxK05n0\F#)Mٰ֞F*ZD@ %X_;wMBA[U&J)1N-` t[ ҀRS̨l`WP&4:v xqJSIRȠUtdWVb@,AnTzCZq2F[@( `5ePDdBELn;clQȌ<BGg*ƒ5 q.0 ֎AC\)L1R"%8TR 3k'T b[9(i3XGfұK!\+B4ަ2ÙPHq ,s$`G td (Ez@ߑPI@PSQzr{FjuO J H/[ܸ4V^n!GAU3_C'%R,"pPBi5%Y,  ^ B9TE54.dPgmLsBwv3q)tǬiNT1&PT1yQHa9IBM`߻Y;L|c('WdϹyLϜכZp v`c{x]!0p:#4 /Fi]t4CҕjIJ2Ba1%OpHv9ŗ,J茸pԉ5'HiD.(2ӪAU &deZCP`Mtx L}$kIu <ov 7|.JrA#+PZDbYьmC5YK1Z1ةq*BNv~:]~ATHD%|uj З9tLIճD(ʠvPJN%l-ѧYjW+y r.!z-bB bjw !T3ڄ`1cL'!}Ƣ7tD"tS:ft]q6 3)I@d(nfWbd@BUq@ơ"2ΪU%CIeXYvnjpM"β] [%VmZb4+#liVԀʬ$޲[6RSң*x/GT^Q MZ6H*tM>%LU)[[&z̃vw7ͫS.C{5^.ۆsU&j,`c;V$LC'Kac+i0CguwG(x$JHu\mT=kM!JKBq$RVO ]j31i'rt0a#[N*3bO*Vj7,J=!ɡmM1WdsC<܈XܢzE,WRX SAA ,3 R 4Č,mAz |"2PAz;ZT}f=*-+P!>]D4cҠNn B`}?M bZ0j`R'JQYTPcD&7cQGQ1aaRuJ` @ QYj0Q3j hNmS; Vnj 4f=ؤ=Cɗ wdd, 6#T tsr|dރ v[Fj-z5FC-*mP FxqVp*-]QZش =W&Eυiq#f0Q%#pzul) ݬnHˡ옅jIr)x* iPb驓H_PFU~B:COW4"坩y: w,0H5寷zFh)r7^n7$:E>/U[R^11nuZ۵_}ߪ M(Lu=Tuafjo6F6߷ǃwԷ+ZctDn>g$nԳ9#Q)-ޟxF"ćHDQ$& %^8tE˷'~sbx(|rh`=ϞzKgNa6OochE[R~jL kwi `} ^:j6ߵ??zs8vRm!a^n-~ۖK̲'ubocW܇;6yuyrċM?մn4➮we}G] _]3 iBWgeV~Vnߎ~Ј}e#!&N1,z- a(cbNfS >2G ߮A]v;d5\'Z5j]<4Bnr̩g鄆z_q԰*USv]pazܪq3֪z㵪 uYt~f{ܜL7'nQܰq2nX ˸a7,eܰq2nX ˸a7,eܰq2nX ˸a7,eܰq2nX ˸a7,eܰq2nX ˸a7,eܰq2nX ˸a6,CQrN Ψa>iC5,ZaP aهN ` N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@:\'5''F?'Hq{JNt?oc'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N vuэZ?''[3':='L ONCt@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; 
t8NCVvkW,^^VSzC?uz^-/aA̜K|K@:{PJ6.q郩(]'0.NEW׫ nPFtut-Nں_偞 #fmߕ]_!D̈́A {dЋxw,O//EaYARJGDAz ՅU/.ҲwwvJ Z֗u%E8]b߾-o z,QvW7+o[u (U_ʹ/~y0nɴ2#%F̆f Ms`FtEO ]\cBWn3]"])s3+v̆s+Bk{^:g ՠB##{$KVu>ѴBZ9\e\hhu{U {׏iIhh-c]+fSSW=LWHWV{a Ol57":;]JtutPqNXk5"Fͅ{uE(=!ҕ7Zj9#"^̆Vp\ƽ+:G(`(;9fw>^áhU1.G]E:7p6O qup$WCWS_ y~.xf5|}7ݻjtw+tЩ=w`gCW7!4LWHWʆOWF!l>Wr7:;]}t4tG9X"VOPॐ;\w>k<`v]qQ瓬@k>yf6'VٜѶSn!;#uvdÃ'yF4^a})'j'|j[o10ruVI_xFxse'bH'|0'K:ey{??J23;WuJK 42CNY kvEdu }];ҿP9զ\WQ9GnBb)Ka:9ձp쇷pˀGzur9q y*a}$Ze g{۸W۱~(vsm-hX` 0ҍ,9ɉÙH_`B}8fO꿊kchFQz/ τ(Kp.A l9?O8`47N}Q<]P$.=-#d5IϣNrvI'4xDћEdFi$IjdBoxC-@r6e@Y"1Z$H]Bhʝ^iT3rn!jZar99d9Oӎ%*P6jIQ/ !D 6$m@&4@5)zA߆]]6߄\-~0 㫛7W $4(PƬ07P91О8UgpZ9ջ`cҺ0B KH*|J1tBzQ-~C0%5RbzKì(\A?j&[ PHgw:!9$4"} ADLd TTD  bQRG a3D6://!:PqBti UgW1vY= j䝞X;r#r +lrs528aY~v;oUr\uZQo6WޔSzunbU/U֞U6&r+iEw+\vkWk}oPF?챵;XUs.zQEpාfU!,dpn;5/ ﮼I+٣敒a2oo-Wv<*L&M;*nuGubyEW`2z/z2C_wzf>N?(C-;ʜQ(]4UZ>-4u_P{gKCX Wneax[ov18)D6j WisnXz,.g; JjR1i?8F+c HK8IH< R)jњ@3P)O GH$D!nޢhQ{{cˠ;FƽIkW9 ܦhV 2B;}rPLYW%NTPAkU*dȄ%I"8hQH4#2z  jgTx7vYemF;㓵H=3Rv+r|y5'92l]gLҩVf7&zTւBjr9e E$v QbLޓlпFbz `$`=II8 QQ+Z3vFnVLvBձ.']pm5g:|3I=Mò}rkl:"L1h#LB۠2& +8-E6 q&W:(ᕈ٨+qblb‚H2My][c(G`cꤵOvk\Bxi#$&@M( ZX.sSVAj@OM?,'0gy0I7 R^tQD A1`&(1H:ՠXvFn}X9kLb<uQ4I#nx"eQWRE E6F/dՈrK d>?)MyԂg|t&LI3Tc95>T{ԋ|azX%(SzfGُ)AsnP"&:&B. QF" l!ӅWAQK-w:;>R> Gw<)wPkzt 0nEAd(NNES!'8(t c6R"7>R=5`UqQ8!pJF *]@7z+8ifzy5l+YoՉyܤԆ5|I9AOt@r;z]0]1'DxUpc~a>%`V|s~>}:+{~o6C#6R%ĻP\MƟn޴ϏJ=`ks|P yTJp"?sr2Xv@M$ AsV}M>ˡ~HVpT_tBRV7?]邏5s &-UdG-[&!Jgb|Yrz|i߅#Ʌy"FF´L´Ly6ej: ӐJz´?̅i\1ʻ_!larcQsC-h+baq Q-?^4;CCQB"Aƺ3~ODZ#1tߙVEgQD/Hh@(dL,.!8t](4LjME"Npolg<"0i5q)ZN댜=C mz4{ rPceJ@#_,xMl9 0/l8~w2-k5E{%Z7aĊC @u. G1:o #AA4(pV7n˥lb`ͭ֒ t1& N$cQ!X)3}Qw_y*-j70O'$kº׮=N’V[fD~זh^rQ<\v(N$;q2M{ u6d# Y&Xd7PqJD"P|IC(J)gT,/2Ҡ:J&%Dq*U1oTf؂xzc?G^DwgH=? 
cb`F9#`b4`IzPw}.rrceuOwC 7U"; <|Qv9L]8CH9{甇3Wp7g mѼm'R@!֤DL.5jIno3hZ݋J՛c"+97+W)fpdWie^^K%s>gaRM] (>V%8G7>5{u?Va_7?T/^̦Upv b.ݧ( rە ٴZ({wa %[rsKmͰfͬ2+q"f>`9j,im;j[_Q)Zu@HnX"}< R//\3 wVʨRBW]`TUϿ_~W/_xWo)3o_˷~xV8ShH`$0MۮMs#h^̀Ӯ-S9qӋQnP||bH׆VθjF (Ƞ셦ߦJp+bC:-3o Z!fMOb{IrJ)2d I!Qd8usg$\'?K*1]kdtHЀ0 MIFotG+P>B&i O:0]1A.9XwP⡫;GOjܚkx9p:e>M֚Swx;j܊|ŧhY'y<^ m)8Ϊº Fu 1ojC޻cm:x#-:d_ΌCFEύ4ᣱA(qZ0WFEL16@yO Ȭ_x&Ⱥ=7ya<J{ >{!B@zq+-6mS_ͿFG㏣Di}&lw",> pZzZC㮱=!'BZkEi51)PB j:]6jyj)` s,N@nE/#'XuU;a?6*Y8|f8Ŕ @*Ae |uB_-JـnΘB" ^ ));a`Ѓ=g[ELk ׄ ib5\y/xHz@:.:ݶq+c:w~b%/|Nj.St82uvۭ"VxBGOT>:ZOpI 0D!8 ϙ+wpV8:G JxȞqaTIJChW> nZ$BHS"Ƽ. ]Lj(KAGD1p.QgG'cP4 E1pO1UaCdZySKc,}MfS!j!vBF! Nd[Nۉ6+qhTNCEvrpP욗ߞ'o5Ftλ }D,7d~ׁKql$?̌<ݞOLjImFQ%,Ub >(2Y*3eDfI6eqmUHv\t\P"ىrE!0c z Hkڤ-^X3s&{ˡ~`8mdn?)#q~w992q}zyPˎWg6+K*}`%s 'D7xDqHi (»H:HbSd(P:1ݾt:;εs Tֻ2FaK!TQJde n  x2_uz+_]*56Ŭzb,dY*nYmf>O?.gSWv[З|lګϗp`@'(hB*ҫV [.ȗ2va/@snbx9bRRR(4( 6C-e\4ZqC YAn4_dzU=|;㗙{zAIoG* <32aG{%N\fP;X QsE1ZYM@<xruYK=6]\H9%rg.HSB!HG*>A`ECN00O+C꺏4?0?݁R 39T5L̓Ȍ` 8EiɡK ΉA[V]7z[X#?PHz FJQۖyx 6dKR$N7,T3L_FbbChz=!f=.m5[b-}tx߳u^?i_DSJBPǧ/]Grm_2_ߦWs0y lgLtպ=C. 
U_O\v=b<FD )뀚μ͚zzzcCŰMzίHt6M촹#7Z6xb6v2mwjV0̢>=<'n8.#!~ bѽhADf͵BJ{3ɋĭÎK+ӎy!1Y GG@<.齕QWt..2V #e=`)xKCڹvHf`,NlmnlUvp8wKТ^yr|ki9+e:AB7.ɔQ d# p(iiB۹Zݕѧ?g`{t;J1on}k $^'r:t50,U^QgS剒n[-5wUO1͸} f#tM#]BZ2'Z>j崩e{J%b-o?ǗvU3Uw+,WB0SLw^VRȁI Cl[%Y.Z U^{u'E,ֆvGV\( ::Gθ !LZdI<.Y~F#MKK˘aI~ɑ$XК u B4 pg~iti/ 2HJw$%Nqnɕc.*GQ•GLQ h@eD>{rv IAeo2C񍥁NC]8[-f,.a:fe#A+ mF؂oš}q"''[(!P2\KfCZ%۠ ,"Jc" L(ʘ됌 կ:!h[K7m"weIiS  &|mլTK/BmȺxDVYeO*»););b-#wdGhg+'Ɛ`"dtH{Z{n)(G(LA-HpR(X;km 9wkp,[ ݥH #U5M}_G_pF~'O3'ǠbHJHգؚd5'%9λ՜P 1d5?¬Z[X KzCW/tŖuj&7tEp ]ZcNWځd "`CWWvE(W@WCWh}+}+B;OW^ ]guhnqqژ=t!d90O߾{[MW {gUs^ < _~{ft4 xEW4ݷ;MJg>B]Pۢ+l_u"v*A+"]+Bi@WGHWImT {CWfώBAB):Fu>X}`'"tutep!{DW}+D+:]J>8p#]9`+zwEhMABiCWCNsECn6Bkفb%7C +]z΁;#BBզ/tEh:]J:B9czDWB0DoZBٮzc+C5e͏%_D2qn7mXɿu}u~<(k{5_5ףtdVG^Et?ӫ8}x/ |NovzC=̼RZj)el\{.)oЯt97q`&)UZ!6oKmh}!7ls)T߿qOþ(4;XĞg \yO(ÝFLkc4fǂ=U+|- iǎ+V'\#کpNrgR^Zq*{ L:\YKҸpJjpr} Z(۞U4vJփ+\-b#*-M:B\ykL[+W+S V0v\A Svu ևPUA8U d58Xc~ի۲ݩpÿ&o cWRl/7WnծMO>Hp W X ǎ+$W ꭔ^D-;]_^^_H0kܤ%^uQ+)~X ǛyL u4'pu`Og>λE$67 .iC S@h_p(7q=ǻ<{{f@@ُ2~ޞToQ9E ;;9Y?BZ"O: Jt>O.[z[tݕ꿟ڈ<Q4}q^ng~y~f=nFWTW* fijs{Uv:(2K}HfrJP?{8${k~ r % V$eR2jUp(ۛBvb̓ OVO:ztrOԜ7bjΑ$>@Uc= *XkɂvTn-9z>j=jd+F > Խђ[{ofۓ&97o~;pw ֽ3'|'Soo>5NU-u="SŋEԋt{ʇ~>}7waBUYc[k}ܒD0nj\kGF+V۩rʓ7͈ΌBLkTEb˵\Z?N0 aPky+~M@+kyjC;R3qe4#8o`]OvrM5b/ >WMWNZ"\`Skyj;XHjgd'\\-> 9P Vj\}̆}}I6jJ7 a„]JW+|S rHq*puZHc* T W ] XcTj1S-&T>xHg- zKߦ#9p*w 3b2*Obq#?zPs55g5Y (RcZPwlN8LVm㥂Vn_nl|5C ,׉ZX R9M./2PӋ6U+ Z+q*pu2Fx#+  rؓsObq*Ք]#Rה]Ű k*}gUrM:\9HRE`%D5bL:fNW2Є#ĕXQ *[ X],\C):F\%Մ+\Qgr3jǮXH\I]Kq*uk$ v>e\؉ Ն,ҋqb{\-pKӓ ^PEbAT+ ւ+Vq*puN)'*kiU\Zcǎ+V(qe38hX/X虶^ Sa6r+gHTiL\kj4Ӭrd&LLkהUBԾ\\W͘Ԫy>X4WKA"\`[Ϙu\0z\A%):F\YC)jpr|0ҚU1qizcrxMvX)_3[^ .1Hؼ(ZwMKVޞ۷+κ6EQiYز˒{@ϯCg5.㟿uw( nޯ?@䅸;n/w BgUV[_}m}Zhc俞/uϊ71r'{%ѱ~yۇQ<̥׷xY׏ܽvrlYYSoKo^|={/,GTG@Q_$U>^1;T?Y >C<̟W댔O4q~yi3 /3x muЧb}gM(QjƷW9ʲ!mh 5R>MƟw䥾+n%Io5+DtQx—߭h*he) 9Av (WV71QLBS6+{͵}n\Ml4D#T!ŤY(+Ŧ2e%[ 3։lcgh;*T(!peftlYҊ FW 9Өokh5dt e"7F֕KJ/|)J +$Q\K&ikݨUUJ5t}y )&[j[cRn5H8ek $Q 2)SE6$LKc.[S5pf4c+%X3r$P7ImsNN:CuY3$;%PMCA`:1IR Гa ;3l]D/ ,V~#XȤBK dR2>l( 
92bFkj+Ex@ i~nwPSݠC2]6 ]F)>G,$@O%eAS&~E#DFG!4Pژ"PJSlDZ( IowaXs6׊/ykMܜȠ3 3H ԭԮk],: 8)|c@Tҡ88;#&_0}Luet,<97 ~ Vٜ|gT~H:OmZ@-x^"b:dfM@p<* qHRQh)s2 LlÜdtyE nX{.tV{PrJMB'1-kD^]Yoǚ+1oj_ 5&K&EmI] 9\ԶEɔ~vb[d/󝯪Nw(C`rQRaj*6:.0A)ԭLx$n`m mGEVgd! T?`yrxU;͊/v~ѯuo>.1q vyJ7 hhI;~W|z.f]Sh&VՇ]Ao7zl1/Wju_^ra/~h[V8Y}ٲ^i5]/,&ǣqnf4fMe5z+l50޵Z5B_ # 0 d>yݩ;qJEųt!+s@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DNu҅L\ZsN h@g-#'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN rl@gy(N w8NVSwUƒ @)3"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN rrCr ='J7'j}N YrHz<9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@rkV6zn5ח͆/@S)yw?J &!‡d\`ˇc\p #Kɸtƥϛ1nj2.Ye8y q3tx3up!~QŭF< &iy,]~/}zqaLџw h!;ܕ Qoaf n|=OyAQTOE*x k= J'Xz^Χe[ltGͺ}RmB_ߟe9u٧tٯ]?)kK]1Mk.2qi㓚SGy z0ܵnUMoiEP"NQ+8OgyZD$ˍZ59tq#|糧iltHFR6WS7ǣz3 Ǻb$Cc-LW?|7ZWC3q>9G7cQP *g?RչM-T fjpcCZh?=gjA)c'l8tU 5h;u({DWCWZy)DWp•QWrNWDWgHWFsDW9].Ygh8u(':C>@c@tf@`+P誢=dY+0z@tҰUkP h<]|]|HWRI]Wb0tU֞|2XQ:Ct+M/.Q>Aap3][Pa(Ot%Xy UP hfNWDWgHWx6r0tUUE\PZƘ%:GrRn~]alŅֹ6w7ח[Z!̐ZCphSRr3:6z0tU*Z/OՌi3+m,Q~5\ *ZyIpEI]#]ǔkeCWQCU}U9s9 '\P誢骢Yn&DWN2=}0*ZsS%uutսuKh{l3wUړO+JGCW+^^0ϝ{d >Fg6V0L9 <1u%+Itئ^9DW+ *Z< eo5|J8ΔA*r0_`kzAP4tP T/L⋟s7CcrPs#s@j9A;w[yOFKs᭕hZ(7Wub/}wݼvMlٽAn>-f"n~*]m/ʠ{ۗ*LJHc kh<8iU!ɉ?_]J< ݿ/M/wIQ5orWgu7?u**4s,Ciy‡ b%ګw)$sIQ Bc9y]/ך )uH҇Yu7j$X%Ԋ-X&[)^iN]5B{OzwbzBt߰m/1cNo?؜5͛?>~2v$rWO}TY\w\mX\$ M48 y~>~Qf-e}G7ïǗcPq\ wZկ=l}}zKG-vqwСWQǏ%~*<Ђh/.9qBSZ u(yի+Z:+əwψ;IP+{1N6yO͖P$t&qޅ-+jrIgN9ZrIwB;R.mQmYςMŶY'4l] D݂w<]\LGMok},pD|&X/9ks)5O\}HvGNo6Mo|:Iz{2in7Os{"$4w O]dޓondq 5sҿm.~XɅ+ǘeI۬40ጉA̖ro#N:Xa&&r P.RYbpE)u1-טm,#yx 4RNgɌל[3f$SH*VgikY-Y/F &nR>mE3-wmH[NX#KfN8ۆzQrlnCtZ(=cǘF1z0IMEft^c@* Ր!U/FOF%w,s?~6?^fInwV֕ ]y{g]x }e_lgSr4--ҽo׫~TD}_G}FEGLmA3I5G>$B%F7erFT[JLRT_DZ$EMiaג8穩Zk>YF`YR!&%*aΉ6ojnmDWLte#‘GygWA&26rNX`s$6Z(!P| G* l!\i*Rr$̉LTz6A8='B/L MD5jw$Mf9X(9lQD.fcQer6vBvՕnL+?0 Gq|}n:inNΧ\ʮ^s8QW>0 ]4;8rS6BjiCobuMg3aLTt郷yWz}tWů0Drv@:3>{o仓|QAܯxxˣs'|Xj[/Jaη؟8UI_}X:<)CO6~0}5~`;z=Si đ6_ ļȴA|I`5 kBҴ e#D5gNU;@~N1Y&`i87#p'g#o |Rg E3p)ݡ @U4ח/;Gٛ# 
.pρI5T`!{dB30"KnMUCEN0dET\ȊzyDd$FH%JfceC&Æ&r8: 3 jzgF7_Kn3hs>gذ^LR+?\B!aMsh1yȝ4F:%r:Z59+44k*AV .1ٜLkyXTmXM*U*|a58ʾPpm "NrOhp:=6-SJ&9M2TN EN]Iq̧X(m $ AHQܦ&j.JY8G"#`ckaƣF1;`kiGWv`fe'ȅQ)RFbhұv# 15C|.;^ 0+b1XŁ2!CĢDC&!ZQp$EJ,F}eXMaKK0aq]Xh*{D{#xPO 0!'&(h@s ΒK]]H1]fҨ@r2B@dRs#HLZp(e/kW8&ң̥.:iɱ~U_[kWx,%L2ZJ[$H $&+ɋ')bWa5?t/5w/ ~|GKnHkm_Lw"c/!Ikzw"%ﰻa0ZkZ+YEn? Dۥ$|jU!=[3A]ћ&Mq'=}zҳM#5j4jh:ͥ;ܢ yc dB:ɒ(Q$'8 -C&򐸊.PNgTɻ"ͬSa2y/wPkvuk׳eY|.jx壹 j?0ʦ C[PT φdWۥg1lRDVԍ6PFΘSUx"*̩*ß^Vh*4ƛI!iɟjK/Wd_2i F뚲='$Y&$iQHML>3"$ՙg]3_BvTX(fZ닏uCl&G&h2e'XZ靴) s@Fަzݓpu3]9E>@Sg0!!rk3iMCo`jQQ[#o.rGGwRt;Qd5k=3yNGʙd f `)pkX@`#"Z1.d)ʙQ*+1#X,I0t@"o R?H(3B<'RĸjV޴4z8X6F75? iug9"R`KkMjm3b h)t>hʹTj^6{7$urk9)cr#$r,]gPKU,~K'{= U8ybEێw2uXڵg8Ȇ;ӊ8fkO?KL-iWgQ(TKAĽ%М%:RRh+8%H9R]C:ffԼV.dU;RB2-2&řGNѮ-z@w={/q0+HC3.0mmp3L0fTB[%g6-3N<_"|Taz#۫{~PHsd 'ȭсwϴLDJPc"Ϯ2tFN /]8n5CY91 K5e"3˭7 mLr}"͊w1|?_vp[GKtgm1^^KlY|U+ .gh.M "QvuixCkEw>yZnqMKx~mT=fv:8/fJ+p/.V{ !Ut3k#o.hus/qG`lLDvjLW;qvN,oƭ*-E=>ƣ卞9v_D Eu׵S^uZ븴2"x7S_yP"=?FyA5qx}Aʿo__?}υ}޼kZu@w`%,߾M ܛ]iSKY7yuK1}|ZB;5 ) ~(i8rb֣l[5)lX^Q埐dWm .hR)FJ !v5a.'ӖvsĠFɝZE dM;%ŲsL* e X),'=uazt)xkRt̤H0*'Bã\DzQ(KN:S\wA58AIOY`5Q9@ǾƃK;_]tIԝID;E|ׁJI4dht`RWڮu':zفܑ.sQF 5K/V hqI!^Fi(]g(5,*ע;Yge9Z1liކ|QDxPtRC'5k'u*4Kj)j&RL$ZSRJR D&4#fzbʅ/?}uHkPVleCbԇYQxqZ]L5o&cE_q q2RoEO;OM Ө +l?Yv7vJh|󯗳TCw[0 3%fA,YeYr"[MTfURu%.IVg>QM`&'GfLUu,y^ճ^_O&7yaŵb븋T]1X,%T㵍p/LDR3R\Gi_8i|w2;Mޝ|wU\G1`8:W|?j_ZKpAdxʧ B#gBz~X:9l'oZ >hjz}r 7µ$mzߖ[v w<8\[p, 7lw\gՏokYg핟F 5Z]5;7|fT};Qe ~{8/q[4=`6V_& Eu٪['~Kԕ#ly\/= ,)/H6:+jaDK(s'Kqܥg(tQ{7:FdŜ'W :,I> jTbp޸=`g1OoNa>^=x8p]kZJO#˹ Fq1YZ%N?QoD.$)֔awF:{aߖa ##,g8'1 $a6)ɕw`@TCDY'ec}@;oWg!<M>\W {;ݴi+պKH辰6K•t2O73y^%ʢN>1E ([FVf'Y<0.pޟ/cQGσho-<ӄO\ Eߓ) V6e[L{GB}b̮t82-K0 Yj?/^iwE \>zVFYq)\61ew;z7@7r"MF7`|p;M1ލ#h^_ДՆ>qu9iTmw&J'ESs C_8 彜 oiB}7+,>j47^W3j5n^mrE\u[H{['m\ulE5/zS7qY]a$#]2_?s+͸gr|xrrM8t󑜯nbF6~Xu2M/<\5BKܤ^~6~|~ޤOqHmlgLO)u{К!O jM~J{f3}o'O'O="!eAi+>xAm"4ΠM\OxqUΦ6])=r_toot֭d#.<f[OrɇO/<0BH߅Z=N rTu"Js)}dR9E1ѱHUzK'i"h# ~[qΈ MqT63q<2zDh*Y2!=!׽*تM]-Kza﹪ϩө1IC"%r@ e4vTrSb ੥&F{Q+Sӡt/:5VX䦷jLY5,}Dqz"akXj#Tۊ렌BrR 
2d}2b, KAw5u<"Ħv[X_.3 _KG5eN~>a_WLnêR9˻绞$pQ\t $<\먚CZ.Pnh܃ )4D33"8HngۻD9{+SB ST%sx")30<)`30mX#q.Pȉ~,e&im W!, &Lgyr5嬑@>'&*۾,w-a20g%w$iB9ǹ1ܤVe]2Q$0d0(DmKoϰW#7N6jqA)F3XJ"==8 AR'5?NTnzc{ K굴s";'90a0-Xݒ 6r !UkHvpn;bUt髯q$_A& Bxނ"Hc:J0PŐdʊ+(L۔u[munj^0NE]2v=suR]Ihx^z[g&dvR`ɵ[ڸʉM}gmm^ai%:pKJjR2D'a]0DEp&4 d:gPJ3$hJr"r#v\~2N-kvp׋4n.F-ZU />ɯ^+g˥ZX1I^r-5S4pfr&^]B\L%q6UVhy߬AA;T+)np)9Cq#,nLDWbJC(m Qj;ҕؒP Ft1tp9)$}+D@WGHWrMdAtu9 µ5zWCWb͡jJj3fp=,]mvk* Jl@WbmԼ b rY ]!Z{BtutDWb J+D+H QJ61ҕQMХU./v*M zm̞rXD6)y~7u[O6G08wdMY M=#S XD}{ƃ DؐrhJZ M#Z-Nӈ4}4-Ѵ$BCWR ]!Z{#Q =ҕT+Z]I .FBF]!]).aDWr++X)thM+͍%/+F#h;]JK@WGHWFY]`ex1tp tDW"]!]Y!,)< ++M)thM]R5ի+S"27voר0tJKiJn@WrmJJ$+0BBWvߺt(= tuLo."E 붐BxB Q>鹖\y cٝ (>=ZPx\/mTӏWW[{>ڬl"Ğ~ONlԒ'Z:ÜL 'wvovdxyI@7!mi .tZLHʈ:kͦ2@ 2]7pu^ m39Ϊںk,r L 0|z3fLnOiZ*sNY<gaiZK(%c"d0,5 B|xR'K!M5rϓ]לϠIΕ ӪMˎ7'Ӟwat4t~aUI ͅ1YiXQגh e.[wSo[2 !GiMeEgDf`:=sӥGTVVmv;ۋ-$פ FGݫ(%ZGh) +lD9Nx[;ew Nc+U.PR8 \SLDdz#+4%8`%U1tp )&t(Jkʕ.0R j{cQ!h!)-Gc "Z{Br06 ]\&D;ԚCND4F)=eJ3cPm@WjmA-0eBBBWоdJ tutŌDWبb誉et((+OQ;~ڽZ{ %[]遮zyTAt_w]!\cK+@;]!%`+E ~\CWUt(PT&5t6zfOd)6f_y0X;yFV,%lF\ r(0},gpjÇ,P&ʄK'zW_VY}ܵmviUlZgU!KVCS'j_y Z5uNpNԊ0݁W&]\s|-z!~ԧ`vuOr%Ͳ֦/-J>N\ ;;yf4Rq76ٽ *io7&<3'7yf/"qwY T .W!K3J4 "\Z޽R>@\EM+,K̂+UvYdq%)4 ؆yP2U|Pսq*pW$.:V2ڕM2 RҵJUcO+2Oz2|n\MgW0TS mfS 6 *9KUɚ4T ̓+U{cW%JWr0TYpjWP)tq7q3ly\ܐfM *=D1Z;҂ nYZZ҂9.pix;Ӊoȭpz jWPk5/T9N\"<]yRgVnwjԸu*b.Wp$7T̃+g,FU.W[ κ/_?,A;Tyuww͍$[M;-z?|?DEP-;NGid~+PG=1_:4Ч_w%ܼc.7F?=A*V;曫"XR跧=BO7wN/ B.OOO ßd}?N什~=|~LWwyF(:R懷YU;O P0/T~c>{?[>@x^O]̔__>c!޼EC3G)~;w Cj翼~ Cv>\ԳcRn% ԲJ@t[(m6W"Q?N߇7~DŽ;?|@r} zw{\l1Ɏw%B7GSRZ \B6O( $>GϗVbr%ɴTE\LnqKi6rp`4'U]Nu~d}g:*'*7]dQie.vyoT0yv}R\uW0t6FCˈ|wic./RشAĪ j +"pu ,,6(Xsv, mM1gT ]fc;6 naz=b I5$մIzFvSFvY5n(a/{ |˩4dsC.E`Lj > il4ŒO1?L>$uN;J0|8>]tJbe J:/g3.)|e 'WͰh.c6f~Jݬnv q|FCcU'?;$l/:%X@ǔʸK VI3^Tc^yz?'m3.. 
q΋>X d-]\{¶!66?w/'W0yZr-@md+NENN>q \fU% IniNHu|vTYˁL;*1VVNL*7a8JF-j=js_/y_+) NɟBC/ϧɫ75+v}YC6]+Di<uve1@n"ZyN}}7C^`/th:]!Jv+ ]!ZvDi8wCW]/O>V۟kB+Z9Cٵ`PnAWj׮m]` MFqQ'‹RsBi-*Nń]kw|g3^djX6̮ C203lXw:<gXһ?8j^yZ(R%YH*-z%>*q~^{Ynea;0duIFpe\,7V"˞[#rٳx,%Ky2oR2#2B)V.Ӑ[rnc6 6AK|Ԇq<>v̀SÙFkQzZV*;2#z4 u RfVE.4`'e6/zRmkWðܻ? :)TyR<6-vE:ekU(^@f z;0]$TCOh/uVP`6~a-U.pm'p5jSBNpj4aqf|í8Yeuªe??Ow |o}w nN,qihbޖZna:iK'H2+ʜsxy^XH'M)U U򌱰9Z멪]r=x2E+(Yޖ\geyy<~&VmK>Y뼗[5=?4? Ku- 'V8Gc;W?4Guv.|Ylzkvf"fEƆf.).LcrteތGwΆ?ۗ??gx.I@:G6^rN0/_Uծg@n/wZaBn36I_g*+7.(yT<[8Y`p=~H+:ϢKk J㼖Yf2q sksƹJ >eW*1nd*͌03W92vBs`.ąOʅKDXzg!qLpzY167~}CFOa}w1 @LYG9STȭBht%x/a}YiK̀^y.@ڔFV`WR(`0cqh`e]amCM}_D܇;X(##K&x!RϕL(2K Qc;DrQt! FD #pԁTG[|@>< 't: +x0ؕ##g.8x=ъ B#-ιuQG wqX*ڥ-aB ܗr%L)&1U|H.r'Q7B8קKЙX*KY1HނeAA .w o<ɧ0/8aAѬ]-9H>k:5q|FCaU<2]I𙬄!Kہ=S&;ُTGOϪ)~}|G 0itOU47J =2S"w||G|ǝ14q{|ǝ>ߑˬ`ܻ"F;]åKY03XyL rE!v<ց.3w)>-[>7~!aR|y&_}oaοa{heUt`*/C3kVܯm}֧q[fǧBu:s:F:r-|ϓq;4aTtgEz9|mBբzs{*Pe7P"%]UyY)R pq7]`,+jo+Y5wϲ lnʗpaq]mli0ʙ݇]\q;]Ke2n˘E&PzC 3Nz)y Z„ Lss=}EGRL!Qɼ^k2Z֗ zQE{^I#܍m~R.ю Kmgj`^TbƮ3fQ Bm7~\p/xYULԦ.._%Y|^+SpZbؐo \ioؠ ߚT>3iY'EwՃinjL.^3vj ל OO:En{W&ts΅zq h!d4uF#;m4J+npkzIdJmm5˖\S&=kY{#? m{'!\'Ze$D)ic4cV'Z<+q2={k$h 6O{IР#Κ K#5N[2]w' (OiU0^q`m`/;k-j>aEOknzf0nm-v\S?8h&Xm|mٖPj1z[jxQ۶< mk`0ʷe;+2.k{~`-s5Ho@k% (~9[0 '?!*Bxj߱/Ż=gƭOylǻe” R'p O'#,Ƴ ߳+m\ oݳE<:mUX!oo[?eUY鶵UK)OǪp&}J|-?^20|Z泥/_X/:LeywϸfZjђkJ0 \f+=c7:9~*uoQO ohuk}:B7"ZϻNWR≮c̘?)WެZU Q:OtutGtVR<.<ޕwqRzuʜsF<;]mst\)Cv(uǼ+])]sGtŽ`}+@˙:]!ʖJtum<}z+U`Toh^3yP}b &4-U+ ]!\BWVvDWGHW{t iN+Y_ İJKAwqF+Si.4^TKm6KrRY(CS#]B#hVJl!6SUN"`cp]Z+QB]MQWV#]s3Ha"J&+g;uEk*Z&hJ]WDu5A]yJs]AR+z\@2lȺõ#?#(HLW+̺zhJ`߸8` ltENsz=̺5#]lt]JQf]MHWDocq/Z}. 
W#/Np.+u2:[6wھ^7WeCWUqy mˏ O0 _oO߹נԭtڅaxtv~ږg:zoB#ƍ>޼ch% еE/Zlt_ݛu<*NWQ}Sٻ>lEfS|^zYeQ.(]v`R*tMe[+/e-;uSӻkñ 0j]bs~vcmoNOc uKPWUj# IJuQ;'Kl%ye'ۂkU_vn*AnS7 Rh]+jQ/E1E +46!<+됉@ڥW}a^᭶@n%g&pP<0-R|tEf&4j!SQJu5A]tXF Oluz."ZeRQV,YteXVi1i>ϙ "]*|Z QJu5A]Y#]Yej6"\k*ЂH~tEf]MPWa"blG_pjҘ *N*8[GWk]K~t(sW/GWf`Bh'sWQϱuG]Ej7*eb &BW&ꡡ#]`|tEVc"J4YWԕD IWFW;v8ZSQ:u5E]yI7_0%.ޣxo6v=MڮM1\4MO (l5ѧPT9 ]d+6u]ejEEWht:p%`GWDiMueuq̮ $]v\tEN@ȺV8F"`p@K]WDiuEa+ltEJ:jd0AsbfDi1 ]Qc]KhH(]bYW =83UVp+5ҩ J:iez^x&a.IkkGgX"& i1h19;2jrB#]ltpP\tE`SQVO*YtTSqg6ID[@g]]w !R*xwqnӢhiIWLlEe]=4 ,$cs]=YN"ZmRQu5A]I2ZQh6"\FWVKR۬)Jի['O뫃6Yѭ2tTmZ(ieY"Եh|k^#JuD}{2n;:-}?NէSq_6OeiC_v A~MgVq>98p4$"itˢ5Vy,T-Y ?fW,$;Xǣ"lp3!kP -6v+{>74 6/Dk!J'a# F ᎝;Gk\"Jg&+:؋Sp+2u]Ju5A] #]`]\tE>_*j:RYf0;!pU VuEsp.t36p%+ɏ]u JRg]] $&Pl# ،^l#W8ڱv:]:DF ^ 6"\hRQu5A]Ѭ䤫lf+•֨uEU^̺z]9YTLoy\u3]clo\OD{e?ǦULdA6ΑeT** gi"x&5I}ZHZ*+P[iB2Z ` :ej!:SZ JFBFW< QVq2YtTL0vp+.u]J9jz2&zQVI!\`+Uɧi1SԕE-=AEEWDkO:&ޞYWѕClb+E+5]sWSԕtsHWl$]e3"Zҫ+z-Hۈ6v8\ nIti@b._uЃ1 #]3ltpǮkI+]"Jm&+iԜFW_8u]\tE>y]Jyt5I]yzO$WOJ}iYx3tzO6D^pUl4M#WUB5M5<iTG_|둋3"u]YWZ/FW+m.+>u]YWԕgIҏ+EWDQe]MGWV 8]s3H#uB+TYWSԕJ[N9Ǵױ $GʺP(8 Ph*xEif]MQW>t>UNgMsZ9/돗y TQF@-M²<]QO)©:ԜͿΫu~ͯ눤s_V}s9kǼ UtmU~޼<|8>gݫy+?3!_GGYmxM+0^rP" y |k^[*Aށs^"Ԯw_NW߼.^˳.fWubSܻs^:6Gs'O~x./ϛWt3e}جˮ :Hk֞NB˦-/)"AAqySmyN)φA.f'|3PqZmto{'0m`A;4Uy6_EE`oI]{} 2x9k^ խ q-Rp^n>Hc:[$ðh9r9H{mz's}3[j%AEw#tikbOa_zur:(a4 y/ȻoΏ/ᚰ9_q!/:,agOz?7z}t~>ӣ]4ßN>\7Y9ܫRX/eQ.uTw|X q >$Rȳ*.ΖA0gU|X~KTo/7c Iy6,P>O ߺ Gs'=+|ʳf"=;_L;;r`BнLo7&qÒ U>,fOvz廄'x[<Hm8vu Efr6[_KIz7}e >Szk/ryq2#:0v=V,ZاRzXI@KHDƥ1 ľa =]m{whA3Rn wGL&mGw96=d`Y{ݳн)'yI6n49_9_Ƿ~ʺ={&7A™aWI'/oMkZVmu[7mTHUIԵTJдшIČ:RCQQyY7ui|-n6ԶR AYkcéҕ~X.6Sfrp;f5y^BSv?U@E(ѷn}^w|ۧwCΉχmW׺A'ڭиr z3_vy0-RwjՂͼS\@rI=`>T=ՙoiwSLAvQNqJ"*RN~ӈ8p5.1o5ƀ\qT{6oC/qz8z2Gu_0꽲_"^>cցkQv7&Q$ҥRim'Cm%b iٻ6rdW<=cުH`Ia>`W[nd;?edVӃL,9U *V-[L&S$SmY^޳) oc.o,'7K%dlҔ䨬$1N eKCШ FeSfE96*CT~}(%X N? 
=\JHY*6+r(*eގvFm6Z-Yx۴q?7wo`pq/>?gD24 G_g_tA#{!R nrB](Dg@c5d0T l.F *Ff"XbFM/kAěˋ0jC'WbI@FJ [k*a6z  0ユ ʁ);ʸi!̐ m0sBFUiem0#*Gؐ>d {.zfJcH%ƤWRT< 0Ðcevii*!/? Ê憧*f A9'q$@ *A lcN6 Y&0] ur4A pSSgS [vx?ƁǑkUu_&Hd7hj:2VA&I}2FyVh>+|04̎$9#$_LRAA lN!ڽG8 b |g>GAX[46v5_ k0TGNgHE tj XΔUTy͠jkТMa]r yR2sHjjT=J:C;njm_uMu)YGp ?.ġUIF}:gcڄfvoguɅ㕎?X=y6=vUN%Pd헻ߟ_A< ]޻\\6]"{nXt:1(joG37/97k牣Pr~' eg6re MkS2[Һ~B&A+.\U'MIlϠўcR vJA (アiɘ3YHU+f ΢*S,h|)Vk@ NkDUI$5{̶[̅87? Ojkbs^Ɣka{^Ӝ@ؑ2sbI z^&)/nT=>K^-猜ʯ'QV1sb6kjboI6&j1)(?`„€^W)cL?%֘<4ԺTv3. c+ڄ#Pc\WLv#&OB`TL'D Nod&I$Y5 kRS4M2|H|Z݀q&h"i7^)#l". DZ5Hsf/mG$T؎H'VhՉƒS:gUdC3H] ȡ[?'>(;CrnM~;i uMkNiZ__@> 8G&)Ccn-Ih=(BP AIztzH\l$ӆ L뇽QyۖgO8PiIٚ %$;R /@3#=:$hAkZ9g&{%-/"nzx;$'\(Tй-l ZV%,MyQ꽔jB@~q*MMjy>Y LtKC;ιgm#J4Mڎs-' +[4g 2;XkYRvEVëE|\6{lP5=^daUvɿwZ |~Olf_~~Qv]겮e!L^)<^ TA am!* bWd)cU|WplRrA`;U:cDPea g%[, CU1'+$ӈx! 5>baDWnsx2#j 5{m뮖tERUXp&c3)*IJ>/&;GYT s5*ECj\~fޫ3rMw~٠&@XS+$C+H,m=xLkEЊcșϳ!4R`b19F@.Y yl 8)c]q|՘~rvd#dS ʋX;0WFS9k 5]&Zڡ H3!fӉbTS;(*]q8QHqXoGkGFkڑa Y!RX j4gmgXتۙ`\zmt[/mZdw innȂD$F~57 m_H3w@RsZߪdB#"mtU#D0<ԏ١&nPm{SINYծ'(Lx J勰}@)伴gy{ֶ=ù(DeŻ툤H9|$I~7 -y"0iQVpE+zI RIT!yۖgJvjxH:\'ғD#m!>jdSM;m MEz!T$mۇ+MVQ%mhQgѨi7bn>1ome:̵Q#.iF댣$c( ڙ$:,؀QtH~(otp}|{<7LMnfPv"-)< JТLS(tdߢu ٻ~ 9\*QI{1lJtYZIj3rNf;ayLg; Yi0yhG$OnDQ7EZʄqn1=ȖP& :gzK 1NNB%#zY0@;Kڌ'mx'>!O@FA6kʜ9hȥz*AZ1S oF݇2獟J փ2 oq2wA猏v$iKaPR#4U:S0aTBaUqt́GTf2iMM⻮a/@HkB ODQf Π'#nXp,'Lp8x[iy;Sb"Jg (U3*lz?5՟ON/l*o#9 |NY˿}Nj>~վ0?bkXE> jM,ǫ .M<㉧-GxoG7 F+7vLf*$nhI:EM32iK0oFC#ٶgEN]yV+6-vyzӂM{Ӏg|oGZVSW|]^ḘX7[.mH(u]ɟUHI%S#7#bfo>gZb)'L'ͯ)./BI\Zhw71@>tҝ6h" ^/X/X/X/u:cxӜl%A?10ԥL+@k"-g@uG BЗxFr?oUEE$)TZTHP )o{${@:UïgkHwvDdQʑȹts|n3`B }Ѡ l7|졌WJwSeZύ=-k3v uT AL0*"xęNh!S<7Շ.KDRyiJd੪3}gwZng,.3OA΁* B@wXq<ՐLs8PsyTcMsYuNqR~=1@ln D+2i)0,$fplwjbJrs^*T:Zc<V\Ԋ2:)+<=UIi Z>4!9zj@}̅#C<~5q GbYdww@ 57Y>D RVm67<$6xRi-Pږgzlih6!*zHD$f02"H⪭xHm ZAr¢H-mxo[B =*ttEM3V6^LvZ oXǪhy4}ILu1Aǃz SypB ދNzGn t; *Nܷ}đλ|qsn>Vy["-]y\s|B/,zʪA;t!%W\^ݲ??OrFGחFX=.ݚrf~HUy^X.gp1iX\% YYMe=3_Þ_tRH~{Uu]O?C].Yy_]T O?.*!R+ }?R sQahbW;q_ k>]^V3#A9%5|J2onW. 
T`=rNtqyжF>]\8n>չssC|r.HdeUTT9PhK-T;ѶG7E;-zXo&pZL{任^yɮC-O_Z YOENÛo.O=Vè_8PR{+K^ⱿZ7Mőe֛^ApdniƉ7MIJJVݗ%6}9+wVK#錮2/=vx.;ӥ2EϢֹZ8W{&q%+qw|vZü-$npCiƦd(JN"ɤDO/`QhtV{zeMb{Б{0JqX}}c93*OZ# Sڡ͵t/=b:&;B!SlܓѢC)+2iX%hM?mpr#O3tW% _I'Y a8jD՛. I"D|7½tC<uq\9܇Hxm#'h,lGi ~,Si>Q~`aw+*=%{~)BbȓƎĹ@#MV ;K XղhZf,j;&'gcuxՓ)<Y.#$QDayN%=S'ʏiRAR."$z1פΏ6@qNRMMp`LJRy%XtTF .e/kq{{/^/0óck^.X.X.X.:Z幣?جxBTJhtO,} `u R)Y( //?^Hycȩ\Wb@j[=e1#xX!اVEwzXh> :/?-V F]i.u=[V! dLJK}.aR[jg߱YXoqu*tcVKn}f o9V;y]Z>#qo+{]J:Mt ;i7O%@*^YU+mIEA0ޞ v ݻ 6˺J]GcuXEJII&@#3^fdddD/F"LEԗGNQ';ȑɸ=L:u{hY)J5^Rgr{8hCk|9um͆Dl0Xc2݉ݻQ<'s1B\0sh[ O/K-aCj;w} V;Dn61<Ϣ9gw }]ž7Nql%=%?ua-36u}<(Lkf)k@m^v@KTmOI={):sDV(wSm7)86o+Smk߼ $֖>[Jb-m:WJAD&R ۂ+"b}ޢYsz;0؈?M%V~{=^/ul`ל2lY _;lzPϗg+)֬CJp+ [c*jO,S{9v{Ț7vz;Az %Hm\՝ꇓ .̦*#u©^, HOW8HÒlڍ'2Uˉ{Nc) pQ*`)Q1ŤdOe+kg֮E> ^JbAm4/Z{I%ouqu)%g\{]B`B;B>ǮuP:]+;WUݜ6Z !ʶb%K%^1T<hpu.0>(Y-Uzi{<9} 2@i<̈0bG &ٴ'!EN\bx fhPHd):!PHK3{6mWCO\!)rGe%۔q'.j(ph傴AO) )wTw?/i;ˡ eȿmHp*So)=i+A{c xޘeS!VmK!['!Np{rKf $)iCiޖlL"/GqDPrmSP'9`Q,T); IM30`{e1"~eܯ/M6nu33`A8[M:ՇgSi1co3I6]OS~GHPL8@"~Dx#V Jq)_җalB*DO5JT]!?r ㊒)Ƅe=#V![=XFVJq/>VNw_o/*$ Z9 e:sge U  nԃ"p|@cLCȹ5PRvCKԎa`Ցp|?P -\W9ycr3Ij:P9ydp|`HA"E5xV$qR9=_&n$P]%R4Ri/n0y~|3+Oq󝢂PuuFn"unAæC([nZ`^m_V[DJ{wA*òF\6)ϖ3+Wt0m n"б6ڋf,'p^ uGKkKi/!kؖ@躑L'G`<0ڢjUkF}0L`þ(y͉;gĜd0T|&1\q9nBq GB bs)s|X]ZS:u{s÷L.-% @Ecl֦&Blң0%Jj8 X܈bi 7{ rYf4iXKR[!@OHф0Mv&.ik}i܅HH#nL6y֫} '^(L|HWY&FĘx9"f"B\tT>)t&Ž%+jIV{>XL+MBڮ/q ܂U'e/c-*Ũ>DMI*t}G|W)qx;.q"ƈYCTxY]]Z̀I _'YVLƝ cۘ*7Zk*zYRz&mK,%vflNK`2@ڻ-0'@ꣃ#ҞX^IDe¸oH ے3Hc8mE"Zo\cp;rzX0,-3 s,p܆^{5Sû.U uO*K,!0@:,BwBՁ6I1i7wrlF:72VleO!J;"2NR4 s¡% D4qRIC荦Gy)\@7.fnHGiPF9}&ݗ;_&KDAodMVNU&5v -2k^s oNqQCUZM >^a>T:9("`7(m _jރ_MlA_@qw!2LQU]<Iъ7fκVEC "-\o0ǘZY/@&dv?v℆MX;rp\ sI D -!4SźEXf嫕JYO&x aX-c?)PF6ʸʴ2WYpf xI 7eTOHYW@@bzh'+OE xXUa v|KNB]*>v+OEǤN)B BQj;"~@}S|~e]+w\X 0YųT)(Mb!Lo8Jz.|hc%YL֮9Π=~QʓzLn#7<.qx\ݎjD^!(}NVoFANF+1:M* ,bxÅ3o|obNK+s|] ̅56`:D$R=0Ĺ\w< S&!sg2޷ܖ&`b̸ jpՐY3o[?ܟy;<)BPGHGZjSwYk}xxkqǓISmR*q3=Pj2 :NG %tUK<+mx=B@T~[ЫLY땛2@>ta!gqrnjO~Wσ]Zq BSc3ez&AuvjyBwi:PDeZ T >KbH`&AP-C72"LE 
Q\Α”@?BLb:@, 0>J VW ch]W=֧ݶih_F6u`Df]4HZpYNYZCi$dGrf;BxQV[~jiLWgh(.黾\&wCxBGQN!)|׫34R^yy2Rށ }t̫~r RtyrҢWft=Ln!}r,&t=nlvv&s˝#Av"N M^۳vVa, қ|`hs"X&0-^׋MDd mi=0A"qtR BR,PvH}aW8Ѱlp- W~*v$<]t7F$̷m9MZ1JN`)@l7 u]U|;=##J ,A\ڪ"4 K!@ jEWس:^w5Ji]w°}Pti*iM i%$3",\j~~$uaYPW!!y[> aKgcݖ>[~8(@\4U!{88V={ N4Ӷ*%x5TBZDJԒm(B6"k ūm$\>E! !>vÔ 0AurIN!_jkapJR-x.ItIQ$ $Z qjaG4"HՖSL\8{#qB 2aBCd4FCq^a _VF,ZQb+K%E륧bEȪmQ4ZWdW |$#21k0xWer}WsAbpWDS:̣u)q ؛d1{eA)VbFVmbBBn d%Izk=z:[/׼^:OLJbäiۄp-ukJ2!TCAa­ l),:زB<~rfG5Z3y|y=)xBWWQ k96wS[1/nT]&q?x>=]}\KS|{2/G:R4$yK7.^Sc<O^¨XނK!^gߜ&jϧ>BPu_'w''J\)9.#GEWΌ*hl\tʫٟO4!]?vX2v}*"huZqK:|JOOOO&゚8*oN̯~Iܕ,__mEi]\>VKUbyr sZp\ٹIt^x\i|z>/ 'qDB&,,(yH6YQ &@O%B9kB%آ(-}qIokC@Zҝ*^b38K|({~[$Y$A pWH8)vnq:wZ ߫蹂 };z KTJoE Q C/CxPz[BȬZ5,mQq SPK U[ˢ I.7 "GW^Qmm02UBtś/6`*6kx,b^]L^o^nCQ o!V? 8 H#탃.LhQ#7д;뇂(`e~*Oxz^ Ҏ_^5/"E gMu#Е/l*/Սj9p<BŶ48%N" +ePK(a2zm93Je<*It TDRT ʝ qpćg 'h@^Q0PO-)P6iѭ%恴HHH{kzqLAۊငrѥZ9GԒI:{ DPsEdpFO 9k$Qw׈VfI뚔Ż;rk#2(yx;B<8QQP QT RPU Q0Z0yXv㿦+y`8A1o[ /r,5LN{ Rp)h19FLb\a9<)Lj@e4AfP9y06 @"N/ ,+0qI #gTa'`]0W.7לw_Md U0H|AWW(9" HG +pB4(D@󀃥%EHqy)$kG6A=d)K"u)9"@)**ŐVC 34bx &܂h)hoGu$.9 y>:VK%M`]eрJfo&*q&kcpvq~//!!8l.) *ŝc&B‚.Mֳ=c.HP.hrHw BsAvZqY|i6NR$O28d`%0BNO F0lnq[ar龦lCҁ1;k@C' i~aZ(0rKt&!(xѤ܄l|$X0,HZp@SĔ6`EH!nǬvxšݎ[ NyeaٟA@W oJ+TJJiV` Ĝkb?}‡3s8O#+7P<5$CEgk)GcSu;OSՂ M! 
#CW 0]6ډ uY"n#frwIs(pI."<E.3Qt$7$w'f³Q!J 3x[naS\b$iYT誘}m/ :j0*Zv=j *Xg1b r}ỠuFmVj}84wde ug A (~%SD[Z7f62ipJQDھnoS"lݼ ["9TRޕ@̗Nk Wu6Ls#'ժ55r}HZkM[h8֧fd`G+ Am|%5}@`f[ 64yȋ5925oaLaC` 5BZ`[ɡ~@+yJ*ȇ63D;Jж=uz{p%f6S}윴ڻ4ǘs:^ #LihGf.1 _ebݡ,㭖 z6ξ6=جކm]qc;*݁j`q||}@IS U*c C(Ju 1}K5ϲ?߅$ޡΆa˿釓<՞m!3̌N>,g- g߄BӞ|2 Jj/n)PW{O-AF{zyy>hoˁz\fk Taz Rh5VGCTijDžDyKea@%ؑ*[pO\GS*85/Jh8P_t> zaA7í9zYwe ,'mMӧy[m9g54V7W\S0n~֤R:@Ml4scKb[IT9,T?:N(i.` RbVFջU^֖usm1o۳-NnٶVx|g)s엩i?2v_3_`ȽU=Vu4\>Dpc)" ^W//PT.#dZ>0}+5_7,Ȣ|;A2/|ڑ,g2UKm^m)q%uŪ:*ݏPPK_٢^ܼ]/o>tRjzӋw_K/ex)^yzPJwviizTVOsvp9C+C+wgMo45=mz!I<"k{Ed9`棋i֓q1C9BwF{ȾI/Lb)|PJDzG:ЁB5}kL18] ?W]GaS+7榼u(|2BK)-ἥ u K_SNexE ˄?Ad)3Hͥޫ{| }t[w]uwuivX>/7MtOhm۲%]|Hx.>o\kokoG !*$ ^+Ϛ,xjmH1+( [Ы4X>>h|L}='q y;zd=ZjaMqa` XUcc'%U|U&,S4I>>6H>b2~Jvwj0ޝ\;~`谷n1TPGBu/YJǃu7~o|r U4aBxn^_ ?}-7wǒ&Z׈ =Ϳ[-xuۏCX-{zP9]k  3ڊo-֧ gb سXC>·}:XPݳXV%>y˟p5xΐC*|Z#]&3rW׹F7mѼ.?}@v?~Ʋ99 6w^aks>C]7NoΕh3]1,fL٩ix;>*6&e˪h av=ƒ7LbB)_zcyc&S#rm׏@;OZSgۮm'w?x1T{Axm\a9Zd˕JK֒Mvc9+eCGcN.ZK$D48h2.0?_~5x2+X@02}dz</h2bNOsZߟ?cVaZ]̓N's?$V;B>2=jY'$iژHPxSFkadrшF0130vg3ST<өmyCJCK\/ŁIrxd*,]ƧL* ¤$HOrD̆МIVkؽZ2iws&E.90znv\ΤĶM-5O0RJ;Kg?Tnn/d1d۟w9!_`荞1 vj\`B[[ Âa2jrraH̒kiZ&)~>WVl*7+;ZҦnE@dgzH5䠙Μ! 
Mu1H﭅CEV&&bu4e_RzjrtZ\\Im<&R) 4}䠣j΁M+if,U7:ӽ{]e{]f5;5St"zƿmst+:j+bfR͇#>Z.&V,~I=;Qig_qqCꐉ#+7% tz/-C/K˴d(wa#7R٘4I'S&RՖ7I(?L0%Tj!$pco3kAnYJ`U #qp܉e.f2^p 20˫lA x芉`y:=Xug)rzSu'v uک.ߖw?Ͻ=8ZnЌ4tl$44 G L5;RY̬ay`x*GXio)~f-B\Э_s}Ţ0ۋջ/n~LwZmonۆ ߧb.za~2ߟ-m+L~0w=oz}><oEzv|`Tm;3 .N6IO}(nL]i& 2}6gnS1":S1F}N;Kn稝cj搐߸N)VOn`O5`[&F0NKXmDr`" / IIW _H :%ѝ@h_7N ?=޶mus릐o餴L^r`b0DM}Vlrl<}a]CB~":EFsp{ڍݦb0Dt>cv|^ TT&tvCB~"t"8&Z8𧺉 +#3*vm<\kNeƧR2eF=hL{"΂tT:D_]2$-R.Yb,VעZKdP#>a˽N+A۶gr1qFM3/y|l^93 !^zl0M ÷B=#hom#Lۚ1Fexb S<>(g{ZɥlIEUVp`Y@F<(H)P5ŜRm2?j]Mu-+%S6uI~D =%;)Z M),YNM&{͹ kAF׬e͖/fOѲEO ʬSfSZ //%q[[͝8 @޼g{Ep/# Ԕ\_2̶]$ pI-=J D>祬69,*ZK횂f=*|,L>i5, %;p2MCJs֙tʃW̏v(T@(SJMbZ$fnktP =-$Śj_p\"P= ?ϣ @u\m5)ci;n]s0,(/] ,`aI)uЅx+e'I2N FBԐD,`֊NNDo'I+ыᄾ8Yy~TϸøR;E7kWr0'WHRUqH**3v^d-EVz^eVumS/u$FO^+p]Vp :ƜmzzxamHP^$eXm_J0N@Q+xٶj,~Ao~|۽n~"_܌9p˳/~ 7_\^/֏5 : lxC0WPtR%!֢/jZALD/]H99 R Iiވ=E=Cyq ʫҢ>zMЕP*䲲Ǜ{n QT"(hBf)Wo+TYW&qRK?lʩ<90O"s$` >pc$xhXu$woszxA? HԚz%\vn״ĄuV'G}WZ:o VmcLIfc@qDa(;,4~vpg;L씃75Ȗ)9cftS8u\wZm*sX1r}-x}m~W>D0SDoNmZAzX;Y#$Y `3Bw#cqϊ֢ݱlnev:G(vxgnAb1+,9i{f?Viv3 S)}BkkA_Zɢ_aőqot( ƒY*`ï wֽ_EV!/J>8{/6zwjELV$Xq(ٌqX\w%aD4n'FK=mz)II(ejJ%fGžO"NLUU_M1' Yy."/̕DUCLe+H/0t-q8JhjcWeYL&1Fk;9$o`Ǹ05gT봕cruu> ?_$C@oiHҹ|Xku5"{ɓz$/f+cnqň~L(isy2O2WoK}SZ2ku{xg鈳0]AXY,կ2kбXKil>LR)mYafS1k%eU- qG?@y/f{ ahRbێ[yV2}.sRuj9›"`Z!j6hI87K?BӦNZgAyBs5I7 ؞L3k:kx ?sRϽ>_8lfZ D}tQV|HXC§@Eٲ*G}K:'5xX+ꍬT?ӻwJzWp2M;DR*MZD7c311qAbFK殂Č*6uY 'O}=7c7~ZE8x'f6NưuIK_JhR9p@GVP%+]~8 X勳J"Aki">P#QH w8\҉C64Il^PI>_dz (&ot6D R)[i7*So ^~Eof? ΢tx΢e2bd2?(RƧ⩏Gm]vdBAk ^ycnEû?.Qހ,.˛榭}o6NCK!џ뼽>K].>qlBoILg#v_P%$;XZI»>9MEǙg_} F&ݢ{Whi pڶi %h Y%`rO&W8B ,,(6槣gW(9{b0r2$e¿I7>)'erR|(NCCcEVXEDNZS$##NS;oDQ%blm?O9W跫:MyEx틮OyD6{.[d& ;W=ƭ_*J֧}^HT˯ּZk5oZJ8l0ԧ$$J5Ŗ(lbJc_08 ĴtiLQ@ Rgɀ/?Iw1\1=pu>󏋗v13~~:K#ԎWrYkqRJF .y0c0(ϻÓRQ9(_]_kKݺղ س8#"Y3^;=磓,0#` 'VO7RhE838bUH'I:ݴ/p7<5̀'$kȰ5h!Q3?m(| uX;ub!е+ zڜ sKuՔ?5fpr䟓>Vlyi3I#\OyUE#XVLla-l6ٰmWي}v_:MV|ogtOhq,M_1JAž":}lq6ƴN,✠kzZsĎ&:UDaյ?;S[HtM}8!̶IV'ml~d"wά(O,-Ԛ+a)Hd'"* !a6m!?iqnACg2]hKBθC&qHY],m\.-:CW! 
"!Kdf4x37:fa˝دoشfQK%$Ab7TC9 R#HJT?o8P)uq.ؔ{{uPy8:rf\D[%whjrD 0T#$ԟJ^}"?4M,e-WE|.O?LewZ`y3ΤyS/Ǹ\L6#Yʝbį$G9ZI0B0-'~ V(F$:[l.l!@.r3cJ=;8=duo:cP٬8lzm@mvk1 Y_t[qԐk[y55S~ VPݙdsT?d.-E%eR 60[Ltl#c|='Ol@⿃b?<قewaF#Xtf%rqLԡ(84&a!#&N8NKE4IH&9+0iA0!XS"JDgKѵnM^V>s⽜h{YuҐV3ՆBp,#_[cAr8!ĄZc#".W'F$&4$mB"Gwik _IB L03(X DS"qZ18&ls)"@ho>[RuƘ[}?sZ:ɮu$E(`9Uԩi \,QI Ӳ \&/)FRnqL/Ԁ\":6cl??x- LyC7$zR)eUk9JE8 +/>ȁҰQRkrn9t*)ñ7\X/zJg|ux@l ¸"1*l+u_"YA}8 ~IF2c!JQzL~|L`(? C`R܊@ - dcα]/%!D7ΏM-FjcW]Td# N' E?RBZ{G:z>;Kd$~x_?5g)7yrv;Ydr/+W>^#SͭF+Z(L(fqcFT'N: &c[f0 PZXy ~I9394˿ MynHo..r4W_z~Zw?y. '[sqbbQ6I#kǔhKḻEqP$b.N[sכ{ÝP *a8nuNiv,QQ AláP& Rj0+3NF<~JBsf87Sa L,L+ZMk`8)  s, lEJpXqX2v JUHB[*X5B#rXn)urS'ἲQ(R EIga,fr I8l #EY~0 1ԉP2V"uo om H"kJ"%`b4<0X+At+d9h]BA’Z#",'+_wc`@$;[l >LJSy1]B"Eu c2-&B^+; @ ޗdf9@bXH6~dWCϜc-iHR abV: F}9S`aA&7 tjc9[`iL` LT䌣zr|0^X#҈d<;-ͦKN *UPכﴰ󝖽DNpa?7ߦ\o&4|JwI`_a߾-o@fD3CUEG*rg#K+xΛ"I.h $4&0JKg3-cF>qnњPD7c3"7FF˓O0dΛQyŪ k1%=؍hyg{3p͆1ҽ@/~)ކ> <|=ioǒEÀ?"طk#y/a}JL$Jxo5IIc̐8%:jUk 6P Nt@S&'U\JKyg~di-ي֠2H\a(fւ9l 6w= *-Ɵl ,I"[G9w>/մxYxuB!k5y}%{HsG|l<{k^?=1|H F\g`׭(?8)+ӳS q zbfLH@`AW( 'eFcP(ghԟRHIKNOT6ooZg;%gzMۼBmnj@64YfozViq0zP&u{)*>LF`./?+T?Ҙf t9#z!iS=WXv.ˤq$jH퇳.[Ƙ0 M (/ a*."AXaFv!R>\7G(>9Kw #fNKp w V*pT(! GIհƓ嘍֚혳Fq{LNc4n530NEUH FsPЭk Wd{b9˼N4(UYFjUpY*pGqaR\,SI탣uRRB383dk̮fms_VSǘCWL|Oψ¢*Pg1ߣ HF~̀7M`w^p j1z-Ny+d=26J%lL5Ng#3cdz6)~a4GGpW{F oՊ6<aȶ`#sDž1-dd%~gb Fڟ1JY}e U)&]Ń9 ?;5I 3BO`E-nk TWQ7{>Iw`h8ru-DOn` $J_fWGfIN2;% T)_wv Sr$mK;Df<gV3HzI;,z8%o p>Jr]I qɏ:ZJj4{$r+iә2kvvb'ZRV?7ބ=mG>hrN|v^ZjWM2|ɚ5 re阧~hBmB%#ɇ/BG"0>Lkr~yba1MXYgxv}鳓 ;> >xZJ!,VrxXɠ(Ϥ(?C\8+⫑ NU:-li|B0ރqu;E[8XCx_~+B@#afPzͳ9,! 
?0Sp p NT};q5'ӯ8=wۇZce&{?'b^$b^$b^$b^ԉx1.8wNIY]'Nڙ"=χ!C bԝϱ!<󖃟q.)}1vƛO/@ <|  N\ςp2Nf~_<{8JNOdnkw'pMF{' P  %ȬS56`!98ila@I")~~5/FF@<$QFBHDĖry*bN߆,:>Fd&9m9&&,9B$F89*Bqm0b;T*2 OYƑ B@>絊*0bN=R+.B%w)FeX[N+R g09g1X1IhQR!#<*ϸ2ccʄRizx̭, 4O%0Gˌ"a._Ar^ v> ̱&+SF ʹrb%ɱʂzuEeр.g!@.O9(/:jf)5QKhA0e @X4H4c~O`Ov6b9O'J!|s_u D4ϟ$?<}~Z{`ҫnpst%'dQ}9lU>?̦?Oq5sлٟwd.x '78SG3 3Is>e D~KM뢳:_t٩y/sH]|=f'Itwz 9(Y Y#86YP$jio9C9R $(m#'%\Fk( 6E%"*>dstk|Ee&rI8J3#E $h)'x0p( ׂ"RoQS["\H j2f&HC(?EvuJ5o"dѿAL4_2UGC h"@JAnp; QpR eB3 x$hЕ}w*(${^­荌lYsD:DX }遫AÝ>EDᒽw޳iyEp30JnCQLbآ @ jzK%>ӓ89Yԗ$E3ra1{1cr?dorߟ)Xs. NS1^ⁿ MX)m\E/Jx.߆W~4iL-wy§\r“/?QsB.‰ӿ#wsC.}qw΢%<K|sM^Kڣt+A딮F1v7-݊7+ݺExJYnd>mBd)5Znn{J7mu!9Tmu+VXT5 G &;5zf0gaZEaFR<6xLre׈n)䀉7#xxŵ.g>B1$OBPFT `j$kOE"PSɋ=NbuZTii ?0Z.6r烳{+=IRC5?A-jsB(AVFDS !dExJ3 Df8me'v=W(~j Cu$JdKC<-]߫d.8;g"aUsҟ>uǼskE:nZN+rj57D@0 [VwwWKջR2E43^ZsLgλHT:d=ܝNfNLx$SQfwfr~=狥+F M60} =d4xvkGUݳ`Jաb&3J j13_`EvErɇs5] :ڭ1AR6'_A&߾gq)Fժj}Z MC˗^䚸Ӕ2bXjr/gk"{&D xX$% ;(s~rT 0*[\];.6E!Ҫ.| bHL"M+k֍mT^2ӎ@ (ɡK!mK1[{ Gm LwT$|L:\@DI9BSN£t2GR k`PݵC Ә@wXgBs-2:'/nMKe)ϘKNjq}UyggU=͢4B[9ڻ eSIZGhcmN3#q_1kl߉ ) T,N17Q%oWL 'YT1/IIs⎜ohCL`WjB2P~kvI$C"_S~w)ŧ/Oq[quc+6Jsug.ءGSA f7)OhvцNKB=l^|^Eho6Lz?ysC뼢DJ aޤ`,٬eoS=93w1TBH9G&euo2 ~-`4HwlWA<$wh:V>ИS?1'httz*yet 1h9鯩\' PvIgde+VL&ۿ~)"|!7eQ3]x @/ǖ\("|!'J?(Sz-| d!x6CwKu/#T5G'-x֟0_^ڙEXX4Y(9Nѓ2ɘq\M|9\4thT,ѓɘ^:5dʋbFOy&Z26z]фa+ihx!f#F3l'n;{KS\ OXDߠ+kd 癅jIjyLzU4&8=N/uFeQadDHU!gwdLV1)F_hCn!g9 b+B&} rSYJꓖ V*V7܄|gL}I| :G)П̎FG5o*&'DԧT!O: vı(Í@"8#@r##@!$9 c\PPH&׺i{aB<3 VR-N:gz{hbAP,Dm6&%EZ uqVϮ4IG`O(iD'b= e )`4j@n|BfhB]TReFHGGLJ\BcpzfszNJ83ЇdKXo0伝K89Tw hS"]d$] O{]%Θ+'|A3uLn=u4nDc9|#``5zr}#\`N۴$fZx>7E@"*a܉lsLpQ2&Kyr"R[Qڡژ)9u:I;YK꾌\.}yM={Pmo%eآbk9"]//ͪ=Y?b>COO[1 ~go gf(d3ݛ[4m+5{oU4yҹLhs/yɛ7%l>)X]|{sس'VtۤSR>\kqT8ǺA.6o~]L %kb͵⇫xV1a?4mV,)+x1{.RX7tcJ+;S&cLǫUǼ!! 
Ɋ㮐VW~ |;Һ-eig3Bh9=s4mqGWb*~Vḧ́ګ#;'$ Gl2Y8g0Y-e)VN+\9TvhXh5Vu3K)`I2,)+RGv:0V2flPvL.ZͮcNhZDPxEc5:ӐvG !B10V40Q'Y%1jCD~t $cd)a~is +k,'}nN dY}-kz6G8ݪFV=;i> ~[-%gs2.^- 'v2jHkQ;Ec܎yĎ|Ia"R'njcOG?ϾɁs$]n} _,fjvreZ?_>YA,ݎoݔYr Aؙx{ۛK|*HK$?!d6{T@▅HFuƙ*4{'yVd9ӭ0CcXԞ"^ ~l8l[dEԃ;ĝӓ'ړZ&Yvߴxϻ^yї ͋WU)"fɓB9h ?UL I@iYJ_Vxnʋ9,t©dt6j<Ix4ս<&Bj7<$eFtƉYbMo,q[OA`;X%?jkwtz!_3k=$EHAрXZ藍Z?Y'IHhT$A@Q9v:h1bCEH%ꨂ:gS(/{ɲcӮSƝ[+*06WlJ*%#eԲs`jv<8'&A ׊5.E mf5ȤC)-#!P¦*9 X!Y $ I !W̎ :׫l_yTSu`-~TڦFpEj<_}z޸(5M~CpOVN9XAGb(NKT$c{=qSFtē '8E}{qή Yۏ~ܐ"N55c@AdYxr5<h\ y5!;q.dc& z0<6"Y*^Da $;1uS!FPB!Շ ޔ1#t3z$>\j*Gpu* W$Uz@OhMԄ-z׹cbYi+8vc!njNqəAuZi8bZKh:|I*z<ǧu. ж&w<yM? sCNknq>\>$sr$ wOJ/_k̈T>z>}leZT&ŋP>٬,J`מ7Ʋ\0"A8 +^aCa6a~5c]!>:{7Ivd~B2/$Lp+Q >pf%h80Mv> e3!fWF&h] ݇I[㾍Ka&i>۳b+5\B֚$D"Q`XyP,E>XTHr ZWqش x+)E?bsEY^JSMAri /J9BN\&ȳVǥn-JɠPvlsY:'`fjճlX> ܴdW>kkC.C'|3[jU+ѳpK?X.ᱷLPwݷ>rI\dw?^\Egl6@ 9^|wyyqq3ΥaZI\|gWol> k(]/_G&q*ku A5UyZ[{~9ǝ ~ \a띄QBL2mZdsnJ6$fta3=M&(SF A¦VR&9 /n vsÎ.a3h-6N%mZS/?қ| /Rt_LƢہ]~|xy e=Ih[!CC.n!x'/}sn s%efodOǫfE{cAKM 4AhJ{1Z{|b\Tѝ0ӕ ܓKFGՃ3ccTF(.QA1gLL$}1*fTHn2IG,D!4k)ɣ]P ч@S !uJD\9bgo`uLr#@W9j2磏.g6-aj7&<&} #e!mUsYjti=@0{KDT'LLd 09x䓗Z4-Q A ,fO:')'Vr"F1fb kn5zv'g$t gSP!vQx!_Fc|bo!Cj>会7q!9G 4)R)pmS[kڤ-1/ [G9A V0JB]{oƲ*hC@nE$HrNQƒk+%EF%%&ERH-Rfܝ!Xn9ALBP S9Ft]E齓0xUz "8ЬF4yOEI g}g#s9 ǣsfvnGfzʌxqPH^t&T#*{֡ށuBYctqbX F"<:3SW )j&n@έ8#yޅtQ߀?vܹ] b7GD7na%.Z|]\}!6{7ÔvN $gbz0W0aq< іJBvyb| XV!U2w X%~AF ߈ 1N( Xa'aih OD?Q0\:*FG">Wwnel{/=:Va:gc7P߸ q&MF.!^.g!`+R\J=vgtuvts $xf%y \QkAY/Ě<)A60 LwʙZ k AAh AC@8PB*C.G>&E>_NXj &UsL0Fj0 2=raZr+2L⽋a ˌƱ7chUrTk㴁IA$I2WajA`,5 1! 
W%`F:Jbj&pTbyJ fB_I$9 kP W' r4>)L3Jgi%+S?H̖LT 8b9~$9xObN̄Z, H0Q\0L R֮h#`c!aÆnL.S#ay 1s,dtvvn'A}4,ѨDKJVwdnVflF(ɶXdx5, .B?4lO͐{(C9j--LUR37?Zn `{{4ՏcW&Mڕ '%cP(jzG<ƼIη<"zBr-yUשrK"fG>RkU|g(3B[t/ǩMp"A[[ e.V\W5OS @zT fH7l~>CEvFkT6rUǥXj5WLMx Jd)w͘&yo{8oVIa"=clO̲)l?˦Xuo)s8Bl/Jp魣=AT;6,l mEꚑ ~AU\%]8ݠFIelai psGSEK(=̿f7nΧܞi@Gܗp A)@\`❓;XPڵVg'9J4T&K+%+HK;gZ,dk7i(SNJs'oo׶#(K%H[ "[ c8XZU&UV|ݼV2 pI-iBdbDy\481 Uyz/X RM@p@b\EހdB5g)f˶S5vJ >y-zHV:`$t1=4˥ ] 4A`YSV'mA} ʦM48PI[2,8G`YLYcV@B̔J?Gqxw36ұ7Y)+˳Y^ UpLVۯKl3aB "|Ā(Gbq+aN5}w?"ݭ<ܬbR8*]V09J5Yd3L F[2*t)aq;C*<)H> E6IE.:@.WXS㽓SOV>NFJ"ayNN8߳ нkwbCt+3[Q>nCյ ܣ0zĈ$)ATH@Zm|:\߼M r涆ۊM}L]J6I"ftIIm,Kk4wM.uTr:>⼘wͫ7s@!XޤsxI6ٟAQCϷg?(An U5^p:jb?lLƄmYAf\+rYЪy2KWY]PFaT_+*utNWECARoR:(b-j]qN;˜ @·iU> P+AgDVsnf,q>>v<.Bwʕ__)EH.#wFU| )Қ#ejI&{ӋOb:ؘZ&Qxhfg̉ ֳ% ,gk ǡ \ftRaTX5q(oFwehdp kQ*r0#d;rHDsJsf-L7 õS\ʩ}nPbhM<$("#Xv+g}ʂ"t.Wc>̈^" c (^Eq sU@YNC6uQ|e1#tB](9Ie>G{9ZT+Uk}GsgS[v%&sֿE&FL 2?]{êwnHPY1Z"Vvc3{@dšB3J٪k&1*|i->?8 0xSΓML<5 ؤK"r)lRT!gL:TE>*UAhk@4MJWD(AF(l&8g?W"Z4c#V #Lh m!R:1[[|x^W >i<9:7H-y}6v=>+`R"DxG*1Ud@."Ƥ\≉1 -@+-8Bo ./XXF"CHb4DZ|J:8Sǩ` B.2t: B&^n.M;뿉s$WTO4Ɵsɼ@Y 1q q8db&J(`ؓ+g Z̥vI5'L":Mϳtѷnd<|o`Cu;o 10Kg@:~}v.ӳ;/.'w wóTCmחo .^1ԇlnGI6. ͺΓ33H\,& \'$tS/'.T5 mDW`fL$'fDg렑x GT`Ɣ9+5YٻpC${pnzͤjƎ[xz6xΞ3g0n!> }N>MGIr2>O8i9p0Mo7 .?f|=W/^<=W޽}NǗ. Nwm,٫i廓_٫>}ӿ.MOoܧ]7&/zw/z@~xg}ӻOo\~2@:9Qj2=66s5r䶕#s3ojv1lyrR~;|7CvVzX=I?zAЌx뮎߅v \!&. 
P՛{HO 7{޴S su?~{q ٌ'/z/FI6ɿ10n R@L ~ L>oz9>eyԯh80III1 ,@:d?=|H28Qb8{zӟ+ww}ÐOjx/3qT:trgZ۶їs{[:x?29sI&s>t4 :jmٕ<ΙwA2Adؑ@p.X,2U?/&bb\+Yy3 s6O\_9$U)K$aHSw!YCćdɒjH6a^KM~ߜ*`DZA*M{Wѵ~2Ͻ_<eTYjTk#KՀFl;['։uR6"ϝsȏ= K0H JC  psb y*dNBFb_6-LOVX߮N1VJR{p T_ \ T 4c@IvHf.ץ_~RK_5c\ `QB*)-L*2X̹H+jb_eb_XOPy*exD4ˉKE 5}!\[fVH,We2k -&!7q hgSc!e{90-WcRK+8b&KQNH 5Z\+b-8O%鰅OQB;V,(j04O3 d=qO >A?a7K`AQ}S,_ cypn2'5ĭӹ^*Db̝(|jٴA@hLפ=$mG:*Fq#Ƅl[f+3ȸ0-уE2%Phmb`"X+isaBs',(1K*3*R_" =p.ܩG3rsF6$~rVπ8דs m?̹w+_)LJaW Ja\"8Dd86#!Zà\Y3O _ -7eWÞ_&>K⏕|D(-d~\!7$Rqz;W7cHIuGg18'r5ʝ1|?PU4O-log3h GL8gk|{ˋ4[}>޺4m\Mw5HrF#OqIEJWjpo-%Eӛ8a0QQ~SpY09 IZB<_* jp}(#k_ &tBk=}Jθ(^7̎ F\Yx_ci7]Y潥Ԁϐx#9;C@<rMpo"!jVr$H궬<~Ni_TKR s6Qk0hJA,BJ%53ֽ s?m%Ւ(=J,Ѳuv?KT%$KKx% k{/i>}!)6Ɂw! yPXȅ +i׀5u&˦sY߆d3%^pvHÚǂ]q#\vGz+գ qi?Dyxǀ\O}4d~;K?_ܘ07z2rRK\KaŨVSP/m]^Jpa% W j|Hp՗?$5mˌW( uwitG7Sǰ TvzZ ьq 0&} 1h6o'r c=M{>{h8J4Ṽ(^^<\k+e^{f2z]hC_wxk@~IT" *Y|'CaFH.PX0EȅUN2TGnY,2:Wo[F˔5^e-ҍփ@"uDhr)$s4]d2;aN!aJ@pze^u 8ym)$R/}̹1!!|4C>Q}Ao?G8A E=JydDֽ5'(բG62hٺdCVJhuouhٺ5dCLb٣D. hٺd/QEָuOs{upz=a)8Ng:I:y6X: @ì^Z b?ܗxEh?/V%G:Qf8Is! ) ?+]j9v$)10 ɝILbř#h) F \5ktLU4zXJY5fr /(qE>.#0Zee<>Y1SvXrNߟ\jz;r𾫛1AD$$l_<(~u{y9]{V?=:^ktA┌Gfiu]ݩw'R@LBw+l_fЕgk_wx p'+j[WқɕҶ0vW KpAZ;_Ԁ1kO7pWɕ*3/0/^{/Ӌx)ahW\ջ[Wز`o⒕ Юtsw&wfVE Zʎ0RͰ-V6LzT0fR]Aڪi +XrTʮm(c"^LDGbeWD*r0ױ 튫m.hS7qI+1?\ϿT%F9#{ H۠F06i`٦6$ (xGװfe0ka V=Cp5v ~\C+ُq+յUwhDTwKͨb_=uswtO^jnFjtmrOW rnGS[Ln>ͿXqJhFX߂H(LדVEl7nfaor=y5dPe U@ղ Q%-^jJX#5R lBPw[Jīi@  'aB(W9̿spnĺ oBNBNBNBN ubHf8kSBaD-'BǍҤ1L/¥MSävz~xz>:A,Hž'w.F 10+*e򃫼ta4͢_X5 ?78 bwQtOŸHc]}lqe&3"VCh>RH?pbJrrxr?|$u. 
*\Nde!o7wZG3ԍ5=|Zdf,R~щ6F'\b绩E+`h&ϟ=ùɸ#DR$ 5 ǔhKlƱYP*b.&*8;h h4>AJV 7::O%~= c`oNǯ8vx;vys dSs9Z7#N'0ybSN^gTg.89ݱY _ť3޾ތ7dj8ʦ=t gSlg~xzws9"iWc}x`N·>u{;t6P"4̝8u.VبXnŒi\I4E!5U=%9g-i*y[*.w[Uf8 Fg-ٿ/;hb?ectB ,֊!p dI vΡLTI\Bj }[ԒS@:z?{6n1_/Hb(oH5ǞIU%2I];FX=z>7VJnԥz~ϹI:JB$Mg3{wZfȍOѫR0\:n?qo9O'H:ӡYIcv4~Q(x"m׍0`vxMVش6mey̹O(I>5`=3~pop!(Fc!!{+ xy!:RrZDwM3:R"K)F(d$Yu;L87NAp[fV4g|W7@IiiCտ+0\Tгk❄WuUۿvI*z;9ys- Xe`w7Aݥz%`VWtr@-gVvVYtqpVmnznu8XFf7r@Jm|)W4x(k'^,XNfЁr9ꗁ!+ !eC WX5o;.ovٲ͂gSV-l9b`b.CN$ 8HQbDէo.҈7&l15b2k0u[g.u<8%2cq_ 7,vN}BH0[J oJ5]0! zʱw/U'`0}æ1H3i^>PJ1Мb4VC+PBxgjp:+_CB hoq+Ǐu~L(BY$GRGDFv2]xki< h^GhOi_~]W"}??k3f.xA(`4'} \ (ip1\')4=PQN['>D!^hvnye՝XK WHs?nev u u1u;S@@n'r4GeI>J~g@bsȋlb,e&Fr}ay:Q?7WΉu!Y^һ{-͝s3TƽJ\eN<: M )lRB9ڸ#]c",zVS2I?i @RKU7-9v?-~EaJИF(ňK ЀW?6cv@֣FvYfj0(v NMOV/4Gq|VT{ lpYw6Ζ3\T:BI"aT'dd>_g(.'4bj w9: {4ľ^j 8!!lNI j}-~(]]GJ3[kHy%)>(h߽O`J/׋}aC t o9/wP Y̞(ci",""AANNQ[Pl)%{] ΛYqRzgwL)ػJC22Xv_Q+e+W1,ZൽJ6ɽ2r?^hu8)7Xa7W rO'ఁ1Da/`X*,pY/\~Ypų, Z7!3|UeRz뇰õlpx2{2#,0^/(haZJ/c֞f$ب}H:w<?M,mgD$i$2Ai*J^_l^[,2thu׳j9jw񧢽4y7s+*>R41$\frKj!_;i] lILq&0̢)eAUi23oyH+b3D)-ޓa +@a:Cq|N/񠟞?t<ێ  n,$ل=V:b!FCƅ% +Jb*=χ~LL P(88ˮU]otHH~1\z١ 9$[R DͧtQ0h-^u'Xmcny@ϱEp0Xv`l+S`H@1Eu܍Ǩ8fД{n GX@Tv%* BdI{AOp.r!5<͟WVš|NQ"D$fJ8ɴ:B2t||%9jomv3c_bxmN(/I!!Cµ3T$'c=bD䐋K02 xnf*y4ξ}sjy,ԌJXꊩpF-iPK}Ob,dACou".dn&Pa}Bo!,8k* 9Ĺy`r.aBhA`!K[?g=!q B\?ciz EDmn9vOS#Fa#H΂(AdrZέ;i],عePJR4F'~Qv% GDB%Z3x*8[ (g}\I9=Gϗ})a8#*.NQL@a9ISU|W+c3H);]fD[FO`9óW+򽗶ˀZI@=Y2 /onq+U~(κZjU_ŒB`kg^o ϓX'=]gO]6 @C;ľ^jAKDo^T/6!VPgwsVq//? 6Rx JX\0V!U㧿~,jbRRlj榢2f*KS"$"zድ j f-tx@0X͟ilxVq+V.g `i'fz&Is#$C+bz `О#XW{Gc YW{ YgJT&k7 hn]`N/E&;^Gb?ׂ^pP'VkNas`CEDsFs{Er_T=seo)/o6d;iCд hI 9|W":䈸T{y{I_g3{bk$ӑ F҅9gVsހ Yi`oHo!NB͸{bDZ$R)j4L+HHo̜ a QҮh1' 蜉S~H/;"/""{ֿXp(_gz%+;8q$X!>.'(\ǐV`!'f}p\t5ߦذ_~ zAu'!VV-`,S鷐NhB2 zM NG'f)9{cGL^L(_Ms(͖;//uDסPX(OIq9 d??[mVy\Gz3G}?;0Z)oһD/:3siY7Yi{있T eBN?`5"(Lc9D$ZT3n~F bh\f0L(CM!*Gʿ wFs33Ӈܖ(ԑ@FbD1QHF3Kw՘. 
m|S˜"B)\Uj)òK3X#,9f ^d@ NhNPP 9S1J3A&i0@ X*z4]~3$G%x2{2gy\-#M[L`'L{13mJ5UZd™iKRh,i(9,2TԳHhX4PACBJ2u i:ӖoӶRgjp~Q(bZaHBS-&>I7 eJ`$a湘bj\a'1ΰy4Ȓg؜R: sliENKPʕ]?V+ #_0_~{xJʟ>2 ϧ\V,ͱ0DB~A+# ) RB8(IRJO3<RLjލ9.* "G[֫wcFO w ]!}x1Y@tA=z*1AJ$c)%(PǬh sd"Te,q4Ыh6j}Tl~p$p dRG*!KSn"؋T['0]V=ͦ\ nI~'3B$G0e"Ld9,Uqb6"NJ̏0NF?̔CjQnF T+M:'%[&RHQn"EE8W ?͊Cض(?F7ϽQ]ks۶+5e/M3Mbi2^@K,$'~,ѺXERq$ $be=e`\Hc z6`:T}:4v }@3X*'*1 D%dk[6 r\A!8ܤ-Mv~igQ`3,]`,` ՛󇻏H*'NFӇr'=tݼׅ2o\QV'{0q%.SW:L/ =!Nv[gf@8buLXc_CpP C_S\2J( *7]ZyˣKHaQ"3'œ:FL-S-[SsW9Y@+\}WYTM?֖@-T&ʸ ɜ N y&Cα ?;a=/yAv 7d%wr7n9 #tvggGV`}A_uMkx0Dq1'ۭv{Rybkm=.3ZW@%4J@j.e~ru87\1D S^V|҆?\YAI+VҌagf]f ] (`kYd2YTL1>|p""m8{d2N:Zѱ ✚fh.~.}\ʗcmSJali`*fVZ}NiU+`XkRp$*Ո WZ܁sKoksͩvI=G'7SKq%Xǧq% 0HsXD)ɄM.auUo?}h={Qv*gڎpl7{`,seVt'L'ImWd,@|와ĝ޷<Ok*HĊ'A%P(l0f'~hxƂhC7ǢO@'txœǢrdi<42xT O{/xЅteÌu}gaܻ.ژmURSMԹ[E+_=!tqp⣉wEދ⾹k]V_HZVToo 5[ t5x{Eth$IG O@ "*q7cw\D  9\(D̄1ZT*BHkǁ/vAMJ]"wļ-rN8`Dʇ' @Вpc3J"1}ch}X z)$\R\u' $ i̱AJAd`R/QG14X?T[c-fr"1\MMZ`)4v%DLIz/[IQH4B ")R%*btmeXh$frPp2(]R+$St\a  48E&>hC^q2VRr-s&brkh(Q.\#˜4Ŋ4XF б c^(DB31Ŝa9L!ѱDR&cQuADկK(¼6I 'ٺaã49'Z~! 
ʗFHi)JK 6]Apf^wۡ~ٺ%t1 !X4\?^' t.qA-ƀG*_0Ja`d,Q@"BW}) j F4qeEY3)p'퀼&Ѽ-cFujB#KΔ&\9vϔeG )R/(KALV: Q p1z(:pdjƦ?Mxx6 SOZƷЬ:n#lڰ`-jN;h5Awzp0|+66OQqD R0dv$LU`A&>LB੠Mw!hEfO9#:q{gPE-q.R5b9 ,-`Pžj= ;T* E1iVcp6E!C9Tn㌇ɶi8lD yYҹ1b05\)^szP\YgDnWecRpcxfjUI/0ʥj-חͻN@?ٚO~^]$ƒ셦/߇ϗǫ<N(" 9ԝ*:@19k^8Y"6\2SN&( ᤁc[Vq.a>B зc2f=}kقmNzqJBhL6}I/jNڙUC|w;eFTފ[g #{jE*DF8R!er%ӫ7N|5ᨊJc7iݮXB@rG6Kq1ARxC3ʘhyl?z*(#8[=Kנ9sQ˜wlB-%IxR!7bk>Xc6a^8b ^CCk; FԗأԢö6"9b[9pqc)򔣴-Zn}ЎA D ^Ktm Lqܼ,,jSÜ9ݖtߐFj jX+D*(%|N`})TC;N2DBwҪds_M8B $[5%6eYl"8MX, WYV+?FtU\/>Ħ-غ;Ӗcqb9a|QZq01;C$bcf;g-榄w,NS VV|cAJ/B([!0m&3(6k%%wdpx GEq(7⁈t%S IcJ杹;Yߚ)V4K߀pCS4O>"B* 2ociN~;|뀨&1Vf?]?h&߾G`;?B|ۻOP^F[ei_ϻV$ݠ}$b*Ӿt?x@^|K&kǭVgI2wڇu]x6a80UNzv䥁 .Ct]Ut2ls߯~vOWi3CgBl5U)}w$>8 Uki`1>K?eyc$@qleN׺msc)ޥ}ǽOs#УK+^?nTEJNKlX.Ƞ\y߾[;3fv}9a܁ܛzC|ҲSkKa̸ Ik+.wl2\Wج 5YI `5gnj"N;m$T^mʭD%8Űc7[$#(vg'T0t*f0t5n^ۓr^h nq7=U3Jc+{ʵPI' M]dX UB/:%D܂?`iNֶQXޕJ.BC.UƊ*g7N(, 3:PmKTBX}RL^4*CAkγ[COp)1a(0(td8~,; ȗ:Ƒ*1H~ .kYs޺]DV=}83هs-UDch .o\?=_Dɰwp.ޞX\Q!qI'8͒mUDIѸA>ar/]\ܳW/<|;K3v 3wS0' $i޶mfB;oyPffU!Df`ugLd}>_FK.q)DNJ^J:YǧS>:|9GN_ -0Bw^ߤ\R ˄\/@ XBHB#̄ 4@O@_CEh"Ϗ޵5#<mTl:'Ifr*E,Mdّd(J6%EŚؖDͯF7f8ϑZ2(@AV lT ].;kA 0m/u>&{Ł}]'8j_kc=G("n~]蕒pu G7a?䣝d&"[|,yZ`^a7 igӏQa> Ϻ:T4.ʄ zE9 WsƂF1bS+O8K0CS ޚ3OIIw3S JOB XS~BCtiDqH^֪K.U4|4][;aWXr޹ǟ$$d䦿$?9y[f8&{ ]Fɺzb@JgZgʚعwݥGlY=WdN_^P`\5ޅ./h￶F C& lUk;W&!S'"嬻qyH,^(3.s̨ce)1ƥ*_TY5JC`[uA,Œ ܑ =@w5E*$񞩰pP]WOh+#;g# @q/ҥҎ9P a@\@mT֋t*WxAXRLht֕u XI1$!RIQUÓ n&(鳫CB)1fRz`B4&)8|$L8%U<ӡE*×sD8 9Pz*sa (rSfDʤ?]Q1G?tVGrl+o;Yڙuff\A4VAY YbCpȉLp (<` RJFJի#BEoi/adIFnOAT7-QAFAg#3xQq=0q% ?2'rM\ !ϴwbLF5\Q J5a!vp/ww]³檹y<*` -= 7)S8FD)eT]41-lH>KAXg'ý&M6l=Ncڨ:ZK) cug4|#X oF}[ /S 篈tH7k9qJY!/&(Z5DO&B E< +?|o~.ӧ͗@0d>WW.~u7_\GŏYܤxgៗgVcu? 䘊G|Z>l}xcBXSO$eMcC!)c˝Yc6ܰrDt`Im!{lMKA/Bqr+*no4 h9c%6j' `c9'GMFEN! 
Yq%k.QΪ)vP/u3'5N=FW.c;R5$5DKIktj?@mX%H|ԝاR1%mV'3DU!EoOݨ8d`BPO2Lu5>\4sYS5\smu|i؆;qO  D)ٯ(KoxBG*tz<Dnx8UFW̍&)DNF'&TCQOFdWzi38ěm' HNj`#K˹OL{ /v̝n5Ept*zl[;7>F% 4q-[cDa0ũo_9)-Y ejnW5 r 4a"kBt?aX%[DCgeG͚ќzƸ sf9CyFB8Sk6؝e$`zc$xr+xѫ&KV kIanbzB3_Q*rբ> (c.~aG9d bC\{RLedA"4q-]OU{Z'H|[;ev8SO@%@ȶ52َnMz qVn TQQanhcADv}F|k*~:aK#u=myR/<}Ep,h}[D\{##B$8^ZZY;]AFxMdUlRjϐQapD6IuKf@" q*[t*ONE/|8Kk q52ܠ`K͆_h2lfb)2_-׈pwto+,[+׌,θ*m#(1onOXdCYdu!+`ϖ`E+y?!db\LnE;z|M|s^o^޻7W_]JisY|Vs]~;PKB1vunA5 1TOA֗Y9)$^Y,.g"ԖJo8J(E,& bPRXt:-71V VT\D4+uRi"Üښ/C!B!3ePt&@+핒H$I* 359͂^r0:).3rpN|rW)._(ί|w^2Y%^ Uƕ. t\1pW$r眔-]n>d5@NlM9gXmt ѽ!?7xCF!_a=ڑvgΆnޢ<\\ámnC/"0h'A_!8?>Dt% {4K{ѿoQGxǛgv_Gz~d *w7ɹ}\)/x|}Oαռg`%|pLz2k㱓3mcs0V#1oǼٙ7; еH'+5eڳ:$,6,9N[U/XpM^ᐤ5gZ75l/BYft><*-Vk:}u}N.>~fMS6rV^C=߽I>Nq7v4#Jƌ1SJj2J=H)bW8}5& ݨ>{ZjqvIJA{f)'RU( Q+:j-RQQθ&4@PqIY{/`Z{ pr羫yF1+

VAF t-5.fØOŜXכMh=|L9c.u}6oklߕۜ6xy-(R8xG 0R)P)IĐ"' l_Hloߋ4 j.Tjw?"8 [Y!iY6Q1&ԏCv7/EN푖VH7HF rM&fLd*~ 4ױ_޿?DSCYԽtC R<~⏾__&JطZW McyL+tF:C:ױ<-/t ӨNИ7Wo| ޵Ig-5މ}xtӵ$ՠݵ@BPT:NkSjj]KHj1l6W.X:#6YA%ʳN~3J3r祢2բ} T Q#t2֓f;G5r(g%:W܉=NǞ_vV$][/+[C_(?Dzrm^JofR9qEI8jk?Um0hNǕ׋B9b -:'FT?ڂ\鵽ի bh]),/ AX_-o. @̮ܜFA^@_m^w 3agTVZu٣ս7>so˭g~$:3Ok1į.{\wg9:HbzDP0}iFBSVIkScMGYYUJ.mLB7)$:M-ކx\Q(&8Pm421WHǥ iזf'󵝨U2_ɏ`ɴrK9K-l-uYQ;%3Gb,'11@ $4Zܥ!Lj"FXSmn<>VZ)BHK =5~33#%.TqN:"&4]Ay BK%M{ 3n`zius:ƯpQ)GFpkS4M-Y6UNZɹ Z[*2E.H\+)keGe7~4Qrs?oMX5|mUjx@^9טY\ݴvql4~ e9wVcN>n$zIUk7xn/^o#Zܒ"LD!4%—PDԙ)c)|pIZdMR\ v Z\.jD5](ϛ޺Jo,)Syr>qB5ߺ4]2=Tp5(g}9I%cn0>MGݼTLhmjk!$gX/&}[aJe&<΂wFd->NYx1d9E_@HK~1젱=Ȣuk"oۭƨ ;hlGl1`r>]ײ5p1fru+잂˧WmWm)$M8 艣G |,{x/-CІ9I915$UZU99Ǫ["D1z4|{%:ouҀWpi iB^QmNHFKM!Cs-bY2u)X$>hoA)K%#EֹGnHr[z*:G)qd1^aE%" l[:X\; `O'>}O磚f~l۷kr a_4>nR\ĥ?cTh\?_C@o1Ed7`NNn݅~]וbtV۵z9c?ڦԛ`@ap}wϜiϓ[QK `rP8oHBGA^PY/ZoހG сaoh{<@}kp:+gf2?+8#+ҿew]ڱr8 -dN'TVaHŮt*2ʕJD1oAwJ˚=]iQkڶ4 h\dē۠0&-oYNmx(wE^褢2S_* |yICh5Q\x_ z .ݺw1DL JӃV.c}rȀO,=CN2ߓ4.¬,a"nY٨ӠՋa :CaH}Qt˱uVI܅]тP8$T ݛlS)^S^`YwMpYlMBwH@j|R>9@s %y nU uIN}x5~ZᗜBKB,/>=Yl\Lc' /xjr.7mFͷcá@8)jRq#LxaZuȄn[{`ϩlV寳Ka`x ٞ-`wC7VC@?LJ><ÐC@Dj-qL~I5wX>7TS!iDqRDW7zӭ~7Q f3-W7QW6C)F%$M9JhDQVkQ X$%7iP&ⴋ y}$fo c/ !LDTs(& 0@sbl>7(~WQTKثq{U:e4;^v-na4#DrT~KZ~N G X;&x0TBݽL҆Z C#$PͰ2ʜʟ@Q* aNSPk84I3bW8 Ŝ6e`{y증͒vy77{̨PRǻ~(w}Ę*a5'ؘB'1RC s4"ZE82(T:*#fl$ÂXr.A^^JլpT t "v:qbkHѱy"rU"FJYЌ[3j<*c4V5.1^V8~iFHLHGizxp%@T,Vk t:sK G5'\|>\0]/S5c g G ~~8HFlp>} /?FwW`:sƖ2$QG*$ws<>ztcM9 I_="T$%k-1Fpc\ЭxǸ_Vf|g4ubqA~@^54sؓzE&8|s!%k /0lgBLjPL"[}r_.'Qy>-|^XN dEO?N@2=In^ΘOn@T-^7 ^|^ Sh30ͅv4}*VCFn(Qhhx1 J F贡I<Bd1κi riMJZ6͍tDHc7 S$ 溳*ð r+x3ѭCizlsG\Pile4GX[JC$j@*zH垒^&G.'4\ەNAWD{ sA)%Ws'>(Q/ qT2!ʘ]Ih"6'a0)4z>?*!KCBo\ Ǎo1} M4BdqZgj7bѲ>>3 M֥l r ޙfoj.E8VXD.:r ѵ~o-EDgoE`o9+ `s!HQv3ǡ'YNe>xX&^ލg'5lĝ&^{L238ۡrV2ة7vNDC8E+-W'"[|Vg=)3TiyF $m{##4ɟ*)D@GB1ĩ<*xƺJqaql;s! 
϶Yvt0zS* Qhi&D&`MRԲ<a w>O]Xr4 v!l^lo SnVb#=wΓ'w7@.dc-@]GyN[*"@;Xbx<1錞7/񲕷:fkyqU`k%( 0x4UnhSe-cB"rÌ0ؘ=#0pnե"e(Vc^]NR%40"ٌ7P&e3З)Ah43Փ3?+ [Hk "zBÆ,S`(:vG{dC;?Ũp`+hRq:TɃTˎjHFdZ}xH>O6^lcAR:(L~HБDrPlÎvywJ!py jҟdiq'"(eb dE{wF !u߻1bڙXY8sF0k5&4BYxNy|<Aڙ{,@] Ξc`!lKz#,""_ /~]z2n2WW9G^J}oU8_`x"S2hݫ ~=|O4]둃uZ]u[:1u57L*D]ܘ *@.b@P@L\y@GY'& i g!ƖǚkbHg"t3 Bi8n"Ę[\ XSeʁJd8Aĸ0ƕHՒXl.L!Kr  I fX 厐(V0J%VA!"VhEָM["S6Y'j  H(l1X!q솑pm ]G%)\#j}{6k&=UξBЫRoH&Nrda4K"yh=b$i߱lȞKoYfu̗FRrǗ_-!D.eЇˆ2n{q}Yw @,(L ~/NjOiht5th3'x:J~~ׯ]0ol$QzG:pŠ1{m<3M0#_ cB`}5wd%vk4cq 0@-irF-""KѲU@6p?"? m|Ww䷏1K? *Jj/FtnH0=Bp sLyG2°bLȆYtV @3bR\;-.0`⋍'sq??Xd*_$.qe^6(*.-(QQ ds2- &^A\xIiw:_ZgDw?lsu-Jׂgtr rB~U,pǙ%:}0N pY٨ 0̈K/e=g!$=Sh->\F~+"h}vz5En0=kyt lq oB^>o#i9ZbtN(ci!^q3 O7h1('6FِBZrT hݶV8E"EU "x^P6A^\xۤ6>N8GF׶t?擳OFm4#X vKːSoF,!(< 64)&naQ&d,j—2o\woa< ˁFO֚Nt2N 6DSsmu&+hzu "k"~"c,sqO3Qtp+L2ҺED'&wx8s7uj|݁/[󳓽P5_?_'p{9mkb|h^ 7pSܖSI-yF gֳYAYA LKu(hiА99q %Ƣh{AyF-(v9ڞۡ}d[9C{|~T>T?vͳRST.5jE& 9KwCݔ:u(ڰNd=[awUY>Jnd?hG=k5fX[Tհ4sxTq#xT:fנ!#OCê\ze.c c;찆X&J GC6Z;eC4n׷R܃nSO'di^|6KftLK:'RKs?$'E/ Puha`t![gהW[_}_%.eY`< ?{pr8h?GS IjF1V̻z5h2/Yك!S[r D fv2 x[TY  JJ3C},7_Zw_^|\ogѶB Ϳ?>}LAUh0{E5MPNY|u+~s~<],>^Wk@H;3+NwA̟Ӄ?_[s^:{qo"ll97? (9N_2Z;g$5;(_Lާ$s(DF@?Gʐ!ȢK[7y܆&|VMOLvsC N$y-.u)PJ5&1TIabI=ncwϚϚv>ع&6 *YZ)[ŒB IYeivz; I -,{hf`O)9-dmAk?;T;.ukOL3na Y ]k~|Yv1?1;-wJbΝ|%8URo~MXjg^}tqIY,uip_* 7]z|y7 ^}oG'y>F@Qoo|qe?\ϯf'īW3~CϜ]!) 
%[U|zoe9YV}^qZV#Lagϒ;D8A&ZUػ)lǻƅ\e n%L.F>%'nTPlTbӡx78|J%CE$R,[8%}[3೨[ַLZΫ|˕ADfKՄM&lj5aS v5PDfK&L "dUG%RM([,Y @Ѳ>~=Z9Kjƙ+ew du (Aav 7sz@Grg$ XA|s}_ezpv޸WOX__|GWCȃ{o`z [~~={>Wzr:Xcԡvd-%scx:~GȮ--H}_!unD~Gq$^[nzIgT7b8;iIEkAe !˞Q:ÁQ蘊j~p/u>7\whȨ(J6g첆,y^j+owBd:tb7uRК{|b {۠vYCJ?$L`S;S;8o[WFˆYT<=tqB)G*\0LPJmO𦡪˄puP{A)*I%dTK׷Qx<eցA=Ps)}f$Qd]XdJ%-dNI l \@J֠Ԙb"Lbe ܚTPq?c5*>cPc'( ytkHXRJ$+HX5]rUfL H:`X ~A㚀hȑk:th.i.DaU AzV`񐵐PTH:"Ԋ7>*)Gv<KiMk^ _Ik֌cghv|&Cp(S  ~0u[Psq!Ň hzЧ嚳~5kxIɝ4]Sp;iF'Y0A4Q r d8H30m%ņd9%z nLGӹ#zBB- ns@s<_puD)9fi93`d6$K>5>9lr=hgx%(28`TNLKESA#:<-:@t-bpFk"($!< 2dF0Z6UĵmX1B@Xn.ݘN2uN!S!0rCqQ!dLجAʼnd4@J $_YRKsj03>旳sN,cL/w!j0U v&HA5B!'`m`4 -QŇR"i:zU*%*!LVngRd"cƐ Ba~M=(F-")e |O#~ݕgKֱDck z1DHʴ{ Es#]o߳zylvGVYg5٢4E@FI9-SJG"EZơ3ssf3c\rv, -^yA:N;Ngh]mr |~0jM8~COΔYYG%qhU_^|] F_懋6?\9\P% ׆ ;Wf1itJMZ>}*{z6%/ڲHHCZmmsmk-,)wuJpi: C*dTK/^G]x\pcPݺr@B٪b4=3޻q!ln+%DƘ'}wr\MJp86Jd(`E&QWFjlsrЌX\ÿ>nR$*n?WS'l PyQ-PA`NZjwl(;I ZyC$I3F$ {Y%qܺܠ5]H-nK|0AX Zp"g!cɹa^RnɃO@8ZqV(jX2ȁfk`.U˜bH1I Fʕ#0\/1W,@roFG-S?>J*AHK^MZe,4zm>Ê||M>ʯJs7{i6 ?YZEJۓR*mnxQEroͳϧƍfyR'߫:mɳJܛaVi|\*k^6튈\!f'KH ;̂rxϜɝӚ`5R!"J !sY {BƗ l|O.o!>ܳ"/K>G5x9 3+^pf[8Cˌy3tg @T󌍍ewz߃mQڭOU#)qwrzo<ުdvjq wpBҮRdC5gka YQwZ:)&5)@sҷl!1F(λ+ّ~e[a6vӭ >o*`]#<#]#җݯ;+-v2[ȑ'}_ LE۟s&xJhۗJ{jA!w" 1}gt>(`u`XU| }o)9K&G,LƜ5458ùs9a=SkʝkhŨ1aU^̄ab8 )TAX-y0_iZ2Id 9r1^3 C]rrBAnݔ˾0f"3#Ϙ"SZpK b͘fᱵp)Lc /4;࿐ hI~/Њ oo)xASHۢpK.mN-)4a<,4b<"Q(kxo[?}7ϰd 71"]4ABi= O0)nB2{ a~]b\<{7 1F$"Oqh^lUv3f\rPV䥄.5ϕ/3ahÒA0Y pJz- yr}%W1ܸmwi(7v;; ? 
t<^ܓ՗wg q5m3)<|"T'ohSW 3pw.wx]UL+Y,.JߛBYIʞ}n#i=娪ʀ<_~M~?Ϟ_n_3æ#GG=v<&Ւ<&<|n0͝8G:/P9~]nֿ _tp8䬋z߫=q{RpF;A`oz>S8[ VAփph?·!Ά(pptx_5N݃+ʕGv@̿~O \\a( (RfxFy C0,+G#O Hp#ԥ(^h#U)Z6Uow+#%~?T/&V.&MEC/m]vZuIJ0}uPݧSqDd-YjqykXwbE+7Q"7r{_^[3AJFf/[d@N7on\^O72ۃU-`'U׻:* 6 ۽,kGS")SI"O2턒BNcYh2^28[BzjT'U#+{N;^{Y^X ?Tzr.<:jX#˒yo?m|OL>a];a_C-8DUFRRcsk4p :pJ _䫇6^aXe,Z]]is~Ow'.3ɺ'&tW ը߿Ywf砿>}C)'Pd|{q?%GG0cyW!qM/ϓ~zWR$yxoO>g=/8VAC:=tc?|Нy=T(ߍڜWv3[v;=yT| )'ʿ{gQћCѾՔ~(_cThB\r ]%}~R¯H$YDOd~˟&GtlT_з8I2IU6O$(za=Eg_4Jf<̲+ 0}2OxFmFC;yXǴ!a6Bڗ1q ϔirE`(IL茄sI9*1rζ10Uzq&^s+"-Ɍ i?"Xx J!mɿtΈ`DYX-¦#Lh[pWDL7+4Hsa59(vf-.)oW+p|<iad8Q8$?0|Ujqby4C%ula{qR)R@[|(7OEN|ނ^DjtޜDPŹ|QWrq#8srlj]1̄MS:!#!ơ% r*9lvel#]:H=>ͳ' N~! *")g:"P8iLvLyε2̤}+7ooo[.{q1yEo6z!w <,~)߹Nλs_^Ň ?!Rr-2Be,gBΗJ~j(ttF"k27HOCW-<+L][o#r+Sk Aξ A$vd#iS/#ےn]f;u7_UX7yH+AEyONzC}$1 M&nΑ(34*sNUqJ`Җ5ڛJ>=}-E@.f87jAKzT+,/ ?&xFNjAӣܦٴ _ܦٗ+,eu6 9v#؉PrD.Ƴlq~snIz?ss9?w<ޭbGS\"AIYhv=M!{}!ot? $G>]{B{{З͗<<}r?Q.QkAϺ/wHM= 9Q=.ŽnGaF8!^}k65t|۱Fh~W J5sM}\m\TǿcL; \ s3mfo+y wtU4khۗ3Mj{Ugt{_ѫ>/>XUBY!F8Sܱ88 Rw톍):QvR#{zW?΃;;P# 6;c8aa|xvv9cs#}5XsXKU`InBu 7˷5\F/ Ŧ[q(i3X2T}~ 'e}aۇקV/K z]C68?`֒ht; "g?tF 5r͟y/2¡to;xmI U6WK/jwT5y-!lB4bڋ╣'u*c*DUn?U cQNOd-06[gKS 4(( FS?kJG 5ݻ? 
c8Qw\}M}nbP {t  y)}߾BPpd3XGrRV0*m&)eӽ}٣ӽ}TIW?H_'L~xy7{>(^E2aC'U=]4G+c{M20^oxV?4ki hC!`҉vִS;:պ&闿Z/efy +wN_!.!ł)֙n:5zL*I2d`$ 4\ 1 }5DNPnd\xnM,v^LßfxFddZ|B`d"M/58Tf'~6 $ Fp$_>4V5>?irTi8 =Қ{5`8 UhrgYN7Ηj{~7[ H 2FxZɱ5F]UwJ{y6_ސݸb|bd=A.2&,62s ϙ''e3݋QϏn wuƁ=7Dr~cכm t?sZ_\`F3[$R|>-1` KG.> BDP0R+= dn"bmNbpEL۬X (U{0}H93.}b}ןI K9fж!9҉iwho#FD.hG_̗h'mۯjZ0JĄ ]L4e4{z23P|(#ګ_L)|WZ$ 259DcB$dN= ֶx\p)G8[(l$J,XHƹxK8k9.^f_le #)׸*2)8aHTYQ==ƌ`fXڴM_S)EJARD4.0]RH1J63oLCQ 0rM` E0d3y(m"Bmx{bjפ˸6{HxI#"6Q<{`|\$!9 PXv5051I=7e/P(Q=~(v#$HYo> wզI` `4[c:(.;z ;*r#OfEȒ&&u*Sdh#FcΡed.zwEkۇ > !pFhZ"p,ޑNo|hWs?U vMȽYR`Jt?,h.Y-w4(5fiI;#Pm=!P/\/qZnihjT6 HZ}h-Z xVk0EK\h-sUŘRwCd!ƧN'/SNs+\`%V #: l,'\x#nQǔ\xFeYz %]C!9G§V6fk0Ȥ/Yj߽|nGF8RV;/\d ^jE6.ZXGhz?}A$Qw{XW&ZsJ5ͬ,=P%O= həTVyIޏK5RιX =&mB']icڡI!vO-_|a0<)r& 2̴נ QjD5FV}u2y~dX%c^4ѵ~d\wj6?sz$T3ڱO`@'~{jsh#oK2z+˻50'FQ3P2dd$"\ux l{̎KQOK8cWk FMNC v))~\\+eIFOΟ׶Y}vM)7bShZȣـR 5=(^vy(rY1Xm tw?-,Eמ.{,kxU朗vdp(dIc=EG5}M~3U22fc-3kiˁVùU{+~9S(d G. T#) 4VoMc{:4&}Ȋ2coǛ<ϵ[SŹz1Ix 3A `dN^RҌ]!7ya\#t !IL琈ܲBԊ:Qrǿ,k96^5'vړؓ0!ZüG$bdD;% Lc|Vkv[ZƧ(k-%769,GK'xK w\OyMU?ߟػf~U^L3Sܹj˦]z9/8/>'ivTƬ %W/+fIS$! grO,7I!a 3SrvGy$<4X{'CLs*˔,]`#!\(7LMkjhIc3[+ xF6}MĻ,sF^W!伯*.ng6}DyC*Nhp8)ةl>&I|C1??1vS1{veY'oD{yL>+s+qHyQ{r|LrUvԛ'Nnp 4,g hM h]M0`͢X˲m["KpܰF80)X!QF.%v'J/ Q1$|ъXe5,jL_%ې?+8Y.]`}"սNՑqNaF?<83Pb^3q]_^g]% "O,R7>UdhL8{n\ @J'k3g0k3g_P͍jumZT1}+f;cS?@SU*[!Uc=vfr=^$'- Y׫Bg+%_=~ow_-Ά(>Ko9?hO6{t%0RyStom,ڍil[ ɞ_h C\^&|"ogLBar^&捜G2ALxV s4g:acxEWN-SqX>9i{rk*tuMs67%{b*:i;&~9-WéRrm*;ϚbΘm֍w5^;ϱ]cT*6L^_?yqY-}h\kYkXۇք>jr_wtr[[ߜZhz嗗 i_/ng?Y8Rẇh6?< s/kvf2H*Lƭ On,_ql0FyI-4WF:&4>tkʃ*u.em_EtkCC~pmSmC81Э)NwA"rǸ[ѭ UNUL?O cw*˽Ys2Oh,zy}i]=G5͇x*#޽;sed74K!՚/xrI{˩_wnּÊcu96b;q}=<6Srsgj&|S֐eOڕ'O]VduJ,P'8̳s72A2J fOow=Y]~ B~ *2D)!F jmi-RVWqd\ÍkP qyg-R7VWRy-.7\t yYC5 oT%aQN4͌$` :,ԉDUq+4 \jH j<\  )bX,- g][Wl}py ^"lQk(k#M L0C^_rycNh . 
)XwĆgóϊtو !Ks#bq*L$GÀ䑶"!px`*21g#;۩W*'K82_ܷ`F  IPReDLI<6x0)hm.!ŝd:6o%_8 NA/] O5ɉ =h?1J΂OIo!ߜJus}Lr<~8@Vh]A  sŷ( ηF=CN'NB>S=6`Q;@98qX ŨP9Š*DFm"Q?NSL0J8L_{1Yb!tS+m/Q"~'θZoR}$GL=cˑrVipFJt{0hOyKX: ( wڢ^J=Z~:=a 0@&:HTHoSW KX "䓊>tE2 v@!57t: ĨCQla.$,VVE=WGO>?RP3:WR9"f(&i2#mѮ3>sv/܎)|xG!)sM#"Sϓщ휠9A \|ǟHKw-Ԁ{M,R0<gWhVnHpG'}6S)Js .ĵR#814H'>~ܔ`% ']'?+>~>J)8AC dW5Us#ܙWb\Os^/c!a`&OwLgm1:u~ڧ"\ _(tȃ{X@abA&Q/;un8h1[Aa8498wpG@]Bcr{wKRl4!ͣ3 rȣx*:3z߻۝ UJ\ Ht -D: F8e ''E&kO=U2|J=U jł'-&|V fK&PitKs1!!%0ػD1Dx&cłJancM;fi [=L^XQNDGl/X@>^7TS!JA~q#a, ɗk)֞=?7kY6<Aǁ&,/HS+~n6ێǿ`od|ϯ>+!|{!I|Hax`I9nI|2PN%Yw4ԘbQy:D5|rnTkjM)mͩb+nNnNMYrnQh}5 I2CVYRhVQq$-/T_jsa"Yb㓣Q a"*R"|'c"pG+;wwzajU itһFIFeC&2t]!@Pbm՘qjxgYgz_ٳ-{yffI$M :[r${.9ߢ$mYԾ!-6,VRv1 6/S֚4&!]Bn+CmYk5`W+ =?^.`A%~h0$}vE<Վq.u>O}CJ́zTQ\%Q)6*ahNzV#)UՄH*h\p<.?+fq)jD/x&m3ZO"zûi< zS'#Ki\Wf0`$ݤAy!,>D7M:>t .#|URT.PӓAй*͓a}gDPGX3Т'W:A$=rh"_^B˖-o_rvQ>w6Pk3T9+CZdVUQzB0bVy?Q XAYsC-Dh]{rѺT^Zx݉6:|oW'K ^Sr-K\-Z@VuӞtL$ۑDѴstbӻNhԆTķE嫳xvֽ@9{ӻ_pu:#٪67sh#M~gy}E-ޢqD:fΝџ $E0s\u?U5L1|:Mb(cԆw]]rH0𐡣z-Kŋt L/Җ7 Ďx=F7 4Smùkqom06G.kp_S)Bu=m\\81iOLH¥ sK>ZG%防q7/7#ˮ%> "=ܚ!R% v#l@8ahw&i J-(=#cJ3 Obk΁\6@D8NՕcsLӈw(29b|).4{z^Xܑ$- S- y&51H= w5u։sV^dƕ1*%y"f4A$)b2;p+_f_NE\E?y=EVA\x5/jW0L!f o\HEC8ӯ'ggvvxUz%oGhS%G~WxXu#J3x:r;/ A hg3(kgy?7|#̕pa,ǁsMIo8^,ڿzu0anZ\fHFXuk7Z0qeEW!%ZZ;A.\kӽz:(fnc)!>K/ΐxss{YzE -3KtjJqwZNFS)4 }5SSRSGČ.~rT1^M Ҝ"EyT!hxMvфC g4/.x1Ob̳&-l#]yT!-=VMש(RJښ?кLs^\ [lX1 CZqA-y0ՂB?nQt'2 &<[U2N7Nuα3 M ?0U(lvLvmMD5JnOf{.C[MSމϚ2JJٟ'EP-;S;AlgD$DE nu~]x}28 _ㅛyG>>CM`O'l7Nip%kw>1BâC)Çr1].^] FPheuQ53`ӆE6ߑvCrrhQ:__r)/IE9{M~5LFUd/ vO%$5cJ jgɤ["G7_,jC5ut!Ae}Gڈ)Z]A5X.ݚ\jN0[<44N|'zn>^>4Rgx`bI*SRey d`2c"v&HWO_o4ZGkwF!fâ"l-wdؚx^kx]ܮBQJ/G7+C 6XV9Lup:OWF#_v\gCy'>}YHGX|O=Jv D^W wdiq.\E#GIyDmYuQMSGR#3'iMSꔱΨ!h%"s:ɼ4QF aN JՁ5Yj ;gTR˄}/tjbJ_vjQnH p_Rمeo7 JqMYޱ.J(}M9ҳFi\7Dk ՛YQMR!P*$K;R!PSMQopJ}y?Gr`Rd^%5^LbF+ H$c.ʹ:SE% a;TG5WEXTnR _\=(C)幝i'PշT3|9Jx¬M0JCiN5\^Pz(+(|7h"rŊ//sFQ|OyP2շTsgRPJTЬ(%*9ܐ\z(8ӎto=u/]f/r~eRojC:.˹a- ~W}1Kc%:7! 
SVrqz^[|x7\-*)ui@%t[jet" 4˚ao̓brZjUuPF]C" PAqluIku(89QP?_MLII辺.]]2GYfZ R5 s |r@va jULVHs_𮪋rтfuqˇiDr>K46QQQsR:׌.Q$R cuo[ρPR}ڵ9ZC$#Ԯ^Ve$0KxT2kSye!P[j] mB2'463v`*D)շ 4s<թgV,f U njS:KRG8|!%\*ɥNx&D)%O a!Z/2&$ KSƥ,M u 魵FSfV:B䚪d`}LJ6aC\{vYK?~xxЫ8 Wo훇WMg.>$ O`Oz~8/Vx {%_W)b3Yx 8bR ӷ{w}F1L1qj9n{(f#EiQѧR)y Y.t URkeE$p40J̠x0H7dW2Gci6 T 0 {P(~)~.H7emi/txjǦįXi!}>тo6ܿ60o>ŐEC4P Z*ln?aCy{98B^uR9rX쾂AɃAPppi:xbt*H3ԅYq gM%.5 2M)g~囀^ddݬު1v'B{.K0MgjgPϗ@ifyB4 ._u1XGs#u =6IS>!/Wp @0@X?0pbe4KH5-8;)r&E`H[p'LGrfM]V'+yF˅4nsx󵫷{oG˻_pg PL + !֦/̃67.k{ ρ_zhm&Nnun'%0?JQ0K*=rIpݩ!G4ָ'joo4چ7$uSwӎB§'֚L*\?{fFy:VRX&msuNuK~ZBM'eE~':`nӮBz:%JR*C$fdgR.!ad߰˾~)Pǎw4N$~dh)YT4YG"B{`u͆c3J2<@0(QDrep!$$W.l!90g7a̮Gz밵*\]y#J4c? T&#7pk,X; [<T8/4+@bNHzH3JnB%Z+:=?lgbK]J32ϒyC) [$)^) 22!BA-xvWPZ QxvZy$ύ^­ T<BcqGqFKu@aEO`AvP^T8QQ8fYp%pfZ WJ*ɪ.%>|xGG[v2 kTjƅh<*4~4kk>bnjX}ӫmPǹHݔ LX8K`h\>վo@w5(M0% ,QB F3E]JNs.YXӅ?IU(J;%H>6(ӉJr 6룆yăJr }(pQq Z<(y<J%J.1: S8l(˕J)kO&ܥ[}t}/UǷ ǡedOՔ^q (5TjpDgĉ'JgQѩ2DkPـh.m-X3bL˳)BgP''Ji$v OS6{ &I"<^Q ,hʸR#S^"5=;Ԗd )xKI+ۃ r,)$lUEҪ+O_"b' Aal6ч,lM;'t'eУ'D,*i%n]^nn%iz IZQ*r'>_F'wd4QɨVŊpב}& Zm!סuʐ3\>X2Ӱ'n|y/d>5RK G,1q؜j-J-/stIJx,CjJ5ORYS$\\Pz(<8X_)J֣]nӄT0ApAk䮘sv8HY:%37GOSM?XʰKeT#FD )cYϮˏ}x,is" fd}/Umi͏|6 狰AJho:E\! N@xE$%VK8Pw?6ڈ3f V;UI֩Y˧lbKĩ(%:A:T d덧e2\0]F*ӜȦgd1_k{ؒLG!=S\N{ I (,H f L_Wzp]p˕&L憣=glД"%26O9qAץod,~˝1d1jEX8l2#⒟lWaں )M)r CñfɃ>R$7<(1`z8\Q";^".cbGiQ=eJiD95Q*,-eZ_o~6oT2F fHrf`u\E0:V;LOP P8jx=u%E)Nd%\|20TX)1V6Kf_W}vm(Bqö' Yo sa Œ `I7<7D\ËuLQEƔfMoYx4fbNQl5TY%O՟^[F5P? jؠ%́b|-lq؂P9.2gzʑ_x)Vط  ڲeّۓ-:eu4t;EdXŔ*I~28,"w' \%h| @S"5RaT"ԪX!BSD~{RQґ^FvxuأYk^Mb)&-SDq7 d2{},0kؼX6cyjp*%sIh0jfO~ DI+z\fkS '@. 
Ĩ}T56Wvg6-ML/w=Qu8k3ՈEhAA-{UMhٯGwb(mW^4 >-,l@[gL[*#9~56+9kEJ*mVd^jAԷ}Jgs"xbtM~)$%4õ`"p1~)f˻B>=j3ect gC'0:7~@4r -dꗴϗbV)ozeSh*(-,69vձhvo^wx<DdmBSR\|V0ZoMqQWXMAQE4Z|UM$b,o @M.\hm R фn( NhNLKe^GxF[ />3b|TBѡ J%+j UDc䩗;SZ4zH{ʋI =u^jJKLis}7h\~ayKmA(5dJ<~B?Fˊn9>0oX/M]n9*⺛wAVABNY k7u c};C€ucC$QD~oU95djI}\n( E9J:Ίt5ٷ# 9x?.f(U W΃V uj (:9@)/%]]*{U@YfN%ҡ|J߀o;X<ܻLG<Nj_5VSN*x1,F ׌Y`fn\f$.s%"2׺*1G.(_*i-3GuNM(lգ!zgQꇛix>;vIz8Ē +[]@(MIXL lwny:B0ULiRzyYN=bCT6!b{/QB7wxY ]"X ,ie%pꓡg11TOe֮pKZxݍB_$ ~{f ׺ W#Ʉ3Z}1`<,gV`cw l:3mސ>zc㩼wDV}_-S>t?)"i}jfEƑ9~*9}6i`}FBX:(i3hֶZ8xdFvdeSoz6^z^dD/פKΡ/߭>-߭ŏlػ7۶ܺOw(j`B/z#XCec/ ˆy ?u48>ż=]? >;{STP]swZ)'776V}Ab{/Gb/Q[:fьlh&&,iSSo|WM[|]`k EɞoIks#aŸY#@r&Q_ TZ.<@>ӔawZ` ӎX4'80VygP\n9o1Z-czmABL\36Yż9^0).MJ]/zo22^+  7? C*[ ( &6Fp{]-0RQ`hrz>5^HRH$y*QaB>ld3zt忪qC}nnj rƪ9e(;18.o*>$W˅ŧFɫlY^^߄\ɭ^Fo6Nǖ_ZE_Bh̡}MCӢNb >F v6,[R AOG竻qgo8WlĘT$Ot퓹w_7lozhʷ*vp IAQ$TTOF|!! Aqx,Kq | g:r?>9W =2O~5% =t!J4{ͱo*ѸblX7#ᛋ~;Y^hj0.JG|1?FoEMOj'p2^GT[¥IdUR6V1뢂N<vDpS.,Ne?Kq_~}r{גq@nmM(Z*EU7۫) .ֳR o>5 WT⭜1qb6zU#DiG=_yR?o$"Ss*ݳV N~lVrBݑu`$'? 7Ly^= XD:,^Q3ԻGJɜu7aT`GOqкB".JbJ(Hl7Sgtt>0J3gO!$ ƕq|lNi)89vFx:q#.$ק֛% >n6y#8dD FJnG Ooo6i(=8%\ >%簕H~||S[~||-sL~>X*?u}sWfu?ZlJށjņdpw!0!_ׯ(6Ͻ-0$,+OR".冫zމwM>|лDz`s{?7?<Gj)47.r7gZ(Iݴť89_u߳EΝJV&iMokkTd()r5DdmurP ~o4&aλ:5pE)΢{;:uRD }dA06WLUd; ?3h&y=O|>dQ7ϋɃx /S}ͳou=OoCʧ}o7Q~cEl֭BsFr$uRp0d aa5{]N#r ɤyN.ݢ t9_JLb W!ޕ6r$Rˮg  yXDGH,RR""(F[,3󋈌̈ykʙOn|BfBJonlL+0za_OOJԡ@L O;mT)})2=,ʫJ!\9X>?ˋ>We0SbK04 ,lRS&.^k)`ũnF\ +s)Fow4Qm*a`Ca%9EѸHN'_II cklNH-齞UBm>:Qr'F{]8ΐ@Lq1jIc%}x1ւu6n`E_Y}"R<fr`aM4OJ]̚b#M:kz  sC*pb|4Ҍ8XB#˓BnBY%D4Ux;J,DP#۩-1f>!5QRҮL,qFeh#ܑ镾ҒrD)XR!M 5݀wyP$dUD*)}TH3"Pnfy`;gǛk]#!ǫrø?_N/?rI5KUnӵ<`y@<˒" ~JF`s>//}5 @)mP! 6""Nz}Iq6s ct#Qr Ko<(@ƾRLP7wž޶pRU,TٗyP 6̷yp1 KNS>-v 伴|` v_)EAcp`rFcюO 2+"iE;SJ tt;M=!؎H"^)vpy7¬eT})lQRڄ)x6r. 
b)+z,K`&)j#O}޴ZFmo*.;UC(okxE⎠iRE(l m^Mnڿ#ZR#PenG61(M:9^CX E:}5<ֹ Q{\P).p*i&,p##aA)ddB ,ھ~Mʾі4J1E+%,#RjkCtGwxlz˾ع.as Y>"6JN`bB3bK݇ 'R65z+eoѓnsW۫D  [~ Qf&^CYPxWnѪꕕGT56 ov!F%lZ7wHs_ͤQ@:zT"0CKG)ה=Ŏ u<UδP &!쉗>wmY?ޖaoCHkԺ.l˨ϛ&ZV/qd#"FI0l6B#bC|H i=qU**x@F`B?PþBJ?,C.!T{lTsc+.GWsO{J z slOJ6*ׄ;q$aGYȻQ/hp鞝*BzT y-JsF[3==̥Rr}Hkn4x@@dGX> QBQpEPq u)>]̧}2(r0sL +e}ή3I:Z h)R?sow'cF>gT>'Q!Tmj4b?qP*?W h_fy[=47ϵR&HBM-5E/ [G_an=ʇUTe0sASMHn7\ő4y D؇j:-Tg}޶ }5 ;N+-J1rp76OI2 A VdYogUTM`NeC^ gA+ +RQU**$!7a5f~-NzX*;5o)sڣn:܊:9Genb|p~񵕛ֻal/ E0~0 GШ~ VRAP3g=̙%G_!Tmi[}Fpz]5oip;-m)`i < h2Ag >tqˌkl+q0;7Ӣb_%`j'LML)>GMTV/OY3}02w LIq%m?bj.$)DnQK0xsq1Ųϟ-sòekTK{^xw^,|d:l-ʺSXJ}Xf}ė+Gᠣi5loM 0$mԭGøORCk^ ಇ0 1%fl{vWj-UnDNK'F rEz}껑Q|%hWnwCR||̂;Ұ>sby]MvU =G4U`u*ɶު?څʊ>I*ge{ԄAgwFF^/fY,&lu"m0hQ1uI0e?LIk x3d1`W͓-+oiɺ3Җ僦},G$G`ѪS^Ɂ.^_n(˵uY`7`L%#ocW:1`+6Rz0rDkIB^6):RnBϞD5A-iNɬ/BT8vkvkAB^6JZw tA5Ai.)wTn͓nmH+ѽe"*oW}V90$=i6)pb&;FUGx2VU߷"(^*I*Qya|0MQjxF@ß%z቗Rc 4X۔l(%iFU֨= c o?Sy}.wN2[Ll:T3N(X"w~/|?ҳ@{f콂O{t%I~e:[RmflYl?Sp%gʕK*b`AYQ[u`U%cmA7&4źӵ5r MIt0r lMQIUU).k |.ۉ?{]֢ \#Ha$Z _bp_0+DZKlnŗRKr9P>*+0lсcb!~9`XO3t"- ?}`"̗Q ,;h$Zr<#y8U2fMo res EAE{@A>ޱ|mcH$@ʦN/7O$-5dzZ+V*^ |yd8 HZy6ZdPȷ> iGdgێ){BRMMvx1G7-vqF/p)T=w, V *#2"Xŀ< >+Αlpv*ec)FMEI/qcvj0lE~Zia꧅_]V b0ɩE -tKu[4wZ38X!XOq?Kn"˫s7$[]gn_N+./%ׁo".fm)4+Wʅ﹕E.]p+D}E#LxjDb(Gw=nF{p_"]Cߌ^"5)bqƾps>@{IvՉ\rW:G\N.<Ù!9EMDnޱ\A=E lc6[%!E-6BCh-Cނş [| ;ڮ uIì~ǎZz@>-:]8KeH|yֺjEx5YQ"U D}Ot9(9xwW !(~vxϭkMd`@q֔,yatIdTTSCʇ)c\ cUKkj%zҳr*%W,z H79 _LŽԙˋ7kqsOE~62ý5 wO?%!~9d]15KNK!wm.^v[l\r8pN~msaWvgf7حUn67vZgMz[-Ծxe=1z*Xqw3/)K%z.uu_/[\]zkz3' 5YqW}Y cFtB5 ITC3$g*/ʬ`0ܺ;S&ʐlqRAIlj9"t_QDx '辧iK/al_fl!iSB;l JlP˵[pjqXMOMkzm-'aQN(_/yI W)J T^OD;1ٗ_dS12ࣝi!)B· bK;Y햭Gg%>8:yݨft8p8@OLCo1[F_gKJ;, Qw|dz}aF3MBhT?l y{̼QNִY) [֞SԱOiY%",ط.rtv‰}_GOwTs]IV9BB~"#SNFh7 $[,>vDrڈZŧHn)$w.Q2%{M;ľ#Dsinl<[ hođ~{OE|l̚E7J}~k{ CXFj޽-R z?s~8qvSr>bdj_5DF-s|x|Ȋ5ȞCSΣF5ޡC;vaFk 4K래%0X0e,#`eV}>sw(hIyEҊҢDvҺ|Xx:R*$?Q6K!dZc9Bt ڈXFsg>'F >#XûF(\okn5MkO,21o)@6mlvo;g7 RGD-bbC$cFcω(GCإr4QLƅgDp;h' zb8QpQ1:H7yb8A"RA†MVV6,}|5%H7Ou) 
Feb 16 12:52:51 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 12:52:51 crc restorecon[4693]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 12:52:51 crc restorecon[4693]:
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc 
restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 12:52:51 
crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 12:52:51 crc 
restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 12:52:51 crc 
restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:51 crc restorecon[4693]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc 
restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc 
restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:51 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 
crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:52 crc restorecon[4693]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 12:52:52 crc restorecon[4693]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 16 12:52:53 crc kubenswrapper[4740]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.042399    4740 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046198    4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046222    4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046232    4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046240    4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046247    4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046253    4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046259    4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046268    4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046282    4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046289    4740 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046295    4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046301    4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046307    4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046313    4740 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046318    4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046323    4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046329    4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046334    4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046341    4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046347    4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046354    4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046361    4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046368    4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046375    4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046382    4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046388    4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046395    4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046402    4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046408    4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046415    4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046424    4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046434    4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046443    4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046450    4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046460    4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046468    4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046475    4740 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046485    4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046494    4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046502    4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046510    4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046516    4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046521    4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046527    4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046533    4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046538    4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046544    4740 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046549    4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046554    4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046559    4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046565    4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046570    4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046575    4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.046580    4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048355    4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048380    4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048387    4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048393    4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048399    4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048405    4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048410    4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048416    4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048423    4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048429    4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048435    4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048442    4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048447    4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048455    4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048461    4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048466    4740 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.048472    4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049326    4740 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049354    4740 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049365    4740 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049374    4740 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049383    4740 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049390    4740 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049399    4740 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049407    4740 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049415    4740 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049423    4740 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049437    4740 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049444    4740 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049450    4740 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049457    4740 flags.go:64] FLAG: --cgroup-root=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049463    4740 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049471    4740 flags.go:64] FLAG: --client-ca-file=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049478    4740 flags.go:64] FLAG: --cloud-config=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049513    4740 flags.go:64] FLAG: --cloud-provider=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049521    4740 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049531    4740 flags.go:64] FLAG: --cluster-domain=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049537    4740 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049543    4740 flags.go:64] FLAG: --config-dir=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049550    4740 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049558    4740 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049581    4740 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049595    4740 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049604    4740 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049805    4740 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049845    4740 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049854    4740 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049862    4740 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049871    4740 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049878    4740 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049886    4740 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049893    4740 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049900    4740 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049906    4740 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049912    4740 flags.go:64] FLAG: --enable-server="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049919    4740 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049927    4740 flags.go:64] FLAG: --event-burst="100"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049934    4740 flags.go:64] FLAG: --event-qps="50"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049941    4740 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049947    4740 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049953    4740 flags.go:64] FLAG: --eviction-hard=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049985    4740 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.049996    4740 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050004    4740 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050011    4740 flags.go:64] FLAG: --eviction-soft=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050017    4740 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050023    4740 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050030    4740 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050036    4740 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050042    4740 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050048    4740 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050054    4740 flags.go:64] FLAG: --feature-gates=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050061    4740 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050069    4740 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050075    4740 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050081    4740 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050088    4740 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050095    4740 flags.go:64] FLAG: --help="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050101    4740 flags.go:64] FLAG: --hostname-override=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050108    4740 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050115    4740 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050121    4740 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050128    4740 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050134    4740 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050140    4740 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050146    4740 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050152    4740 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050159    4740 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050165    4740 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050172    4740 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050178    4740 flags.go:64] FLAG: --kube-reserved=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050184    4740 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050190    4740 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050197    4740 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050204    4740 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050211    4740 flags.go:64] FLAG: --lock-file=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050219    4740 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050226    4740 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050236    4740 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050248    4740 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050254    4740 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050260    4740 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050267    4740 flags.go:64] FLAG: --logging-format="text"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050273    4740 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050280    4740 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050285    4740 flags.go:64] FLAG: --manifest-url=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050292    4740 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050300    4740 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050306    4740 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050314    4740 flags.go:64] FLAG: --max-pods="110"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050320    4740 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050326    4740 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050332    4740 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050338    4740 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050344    4740 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050350    4740 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050356    4740 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050370    4740 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050376    4740 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050383    4740 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050390    4740 flags.go:64] FLAG: --pod-cidr=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050395    4740 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050404    4740 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050411    4740 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050417    4740 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050422    4740 flags.go:64] FLAG: --port="10250"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050429    4740 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050435    4740 flags.go:64] FLAG: --provider-id=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050440    4740 flags.go:64] FLAG: --qos-reserved=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050447    4740 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050453    4740 flags.go:64] FLAG: --register-node="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050460    4740 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050466    4740 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050476    4740 flags.go:64] FLAG: --registry-burst="10"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050483    4740 flags.go:64] FLAG: --registry-qps="5"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050489    4740 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050495    4740 flags.go:64] FLAG: --reserved-memory=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050502    4740 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050508    4740 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050515    4740 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050521    4740 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050527    4740 flags.go:64] FLAG: --runonce="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050533    4740 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050539    4740 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050545    4740 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050551    4740 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050557    4740 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050563    4740 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050570    4740 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050576    4740 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050582    4740 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050588    4740 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050594    4740 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050600    4740 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050606    4740 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050613    4740 flags.go:64] FLAG: --system-cgroups=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050618    4740 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050627    4740 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050633    4740 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050639    4740 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050647    4740 flags.go:64] FLAG: --tls-min-version=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050653    4740 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050659    4740 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050665    4740 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050671    4740 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050677    4740 flags.go:64] FLAG: --v="2"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050685    4740 flags.go:64] FLAG: --version="false"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050692    4740 flags.go:64] FLAG: --vmodule=""
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050699    4740 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.050706    4740 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050876    4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050884    4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050890    4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050896    4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050902    4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050907    4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050915    4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050921 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050927 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050933 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050939 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050944 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050952 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050958 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050964 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050969 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050974 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050979 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050985 4740 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050990 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.050995 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051001 
4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051006 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051012 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051017 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051022 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051029 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051036 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051042 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051048 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051054 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051059 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051066 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051072 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051077 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 12:52:53 crc 
kubenswrapper[4740]: W0216 12:52:53.051083 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051089 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051094 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051104 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051109 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051115 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051120 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051125 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051130 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051135 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051141 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051146 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051151 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051156 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051161 4740 feature_gate.go:330] 
unrecognized feature gate: VSphereDriverConfiguration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051166 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051172 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051177 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051182 4740 feature_gate.go:330] unrecognized feature gate: Example Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051187 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051193 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051199 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051205 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051212 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051219 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051225 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051232 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051238 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051244 4740 feature_gate.go:330] unrecognized feature gate: 
BuildCSIVolumes Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051249 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051254 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051259 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051264 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051270 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051275 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.051283 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.052129 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.064855 4740 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.065276 4740 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065389 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 
12:52:53.065404 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065447 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065454 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065460 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065467 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065472 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065478 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065483 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065489 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065494 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065499 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065504 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065510 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065515 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 
12:52:53.065521 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065527 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065534 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065541 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065547 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065555 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065561 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065566 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065572 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065579 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065587 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065593 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065599 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065605 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065613 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065619 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065625 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065631 4740 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065637 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065644 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065649 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065654 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065660 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065666 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065671 4740 feature_gate.go:330] 
unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065677 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065682 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065687 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065692 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065699 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065706 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065713 4740 feature_gate.go:330] unrecognized feature gate: Example Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065722 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065730 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065736 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065742 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065748 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065755 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065761 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065767 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065772 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065777 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065783 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065790 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065796 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065803 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065831 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065838 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065846 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065854 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065861 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065868 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065873 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065880 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065887 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.065894 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.065906 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066108 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066122 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066129 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066137 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066144 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066152 4740 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066160 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066167 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066174 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066180 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066185 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066191 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066196 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066202 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066208 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066213 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066220 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066227 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066233 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066240 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066245 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066251 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066257 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066262 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066268 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066274 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066279 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066284 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066290 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066297 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066305 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066311 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066318 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066324 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066330 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066336 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066342 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066349 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066354 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066361 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066368 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066374 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066380 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066386 4740 
feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066391 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066397 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066402 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066408 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066413 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066419 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066426 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066431 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066437 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066443 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066449 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066455 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066460 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066465 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 12:52:53 
crc kubenswrapper[4740]: W0216 12:52:53.066471 4740 feature_gate.go:330] unrecognized feature gate: Example Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066476 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066481 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066486 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066492 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066498 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066505 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066510 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066516 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066521 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066527 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066532 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.066537 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.066546 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.066802 4740 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.071983 4740 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.072093 4740 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.074971 4740 server.go:997] "Starting client certificate rotation" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.075003 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.075272 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-02 01:10:35.956269496 +0000 UTC Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.075432 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.101669 4740 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.105333 4740 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.105719 4740 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.128375 4740 log.go:25] "Validated CRI v1 runtime API" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.161858 4740 log.go:25] "Validated CRI v1 image API" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.165242 4740 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.171962 4740 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-12-48-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.172022 4740 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:44 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.194216 4740 manager.go:217] Machine: {Timestamp:2026-02-16 12:52:53.190132822 +0000 UTC m=+0.566481583 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 
AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7ed304a0-359f-427d-948c-1ad2fcad2d68 BootID:16811f3b-c2df-4c7d-9862-6b10264a49b2 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:44 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e6:34:a4 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e6:34:a4 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:53:e3:42 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d4:45:01 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ee:d7:5f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:74:77:94 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:12:c1:bf:e6:94:28 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6e:6f:e4:a3:af:f8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data 
Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.194519 4740 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.194798 4740 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.195348 4740 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.195564 4740 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.195599 4740 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.195860 4740 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.195872 4740 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.196655 4740 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.196710 4740 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.197744 4740 state_mem.go:36] "Initialized new in-memory state store" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.198030 4740 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.202532 4740 kubelet.go:418] "Attempting to sync node with API server" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.202572 4740 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.202606 4740 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.202626 4740 kubelet.go:324] "Adding apiserver pod source" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.202645 4740 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.207267 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.207345 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.207474 4740 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.207515 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.208891 4740 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.210684 4740 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.213201 4740 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.214888 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.214936 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.214952 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.214966 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.214986 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215000 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215013 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215034 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215051 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215065 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215084 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.215098 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.216130 4740 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.216945 4740 server.go:1280] "Started kubelet" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.217788 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218478 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218543 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:24:57.649725814 +0000 UTC Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218687 4740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218704 4740 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218759 4740 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218776 4740 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 12:52:53 crc systemd[1]: Started Kubernetes Kubelet. 
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.218722 4740 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.219403 4740 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.220240 4740 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.220293 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.220477 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.220570 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.221880 4740 factory.go:55] Registering systemd factory Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.221914 4740 factory.go:221] Registration of the systemd container factory successfully Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.222231 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Feb 16 12:52:53 crc 
kubenswrapper[4740]: I0216 12:52:53.222397 4740 factory.go:153] Registering CRI-O factory Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.222426 4740 factory.go:221] Registration of the crio container factory successfully Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.222547 4740 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.222583 4740 factory.go:103] Registering Raw factory Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.222616 4740 manager.go:1196] Started watching for new ooms in manager Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.232708 4740 manager.go:319] Starting recovery of all containers Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.233856 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894bb31254a9cd5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:52:53.216885973 +0000 UTC m=+0.593234774,LastTimestamp:2026-02-16 12:52:53.216885973 +0000 UTC m=+0.593234774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.237247 4740 server.go:460] "Adding debug handlers to kubelet server" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243146 4740 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243203 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243220 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243234 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243247 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243261 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243273 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243285 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243322 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243334 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243344 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243356 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243369 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243384 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243396 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243407 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243429 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243441 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243452 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243463 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243473 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243509 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243523 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243535 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243572 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" 
seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243589 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243606 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243618 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243629 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243641 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243651 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243661 4740 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243674 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243686 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.243703 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244921 4740 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244949 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 
12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244961 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244975 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244984 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.244993 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245002 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245012 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245021 4740 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245030 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245040 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245049 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245059 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245069 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245079 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245089 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245098 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245108 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245120 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245131 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245141 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245150 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245171 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245180 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245189 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245199 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245208 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245217 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245226 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245236 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245250 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245261 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245272 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245285 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245297 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245310 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245321 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245332 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245342 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" 
seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245389 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245406 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245420 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245432 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245446 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245460 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: 
I0216 12:52:53.245474 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245485 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245499 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245510 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245522 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245535 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245546 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245558 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245567 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245580 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245592 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245604 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245615 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245626 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245640 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245652 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245666 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245678 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245690 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245701 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245713 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245726 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245740 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245752 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245766 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245784 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245797 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245826 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245839 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245851 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245865 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245878 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245892 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245905 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245917 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245929 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245940 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245953 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245963 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245975 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.245989 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246000 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246012 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" 
seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246024 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246035 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246046 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246060 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246071 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246085 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246096 4740 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246109 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246121 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246133 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246145 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246157 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246174 4740 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246188 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246199 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246212 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246224 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246236 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246249 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246260 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246272 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246287 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246299 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246310 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246357 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246384 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246396 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246408 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246428 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246442 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246453 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" 
seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246464 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246475 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246486 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246503 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246516 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246527 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 12:52:53 crc 
kubenswrapper[4740]: I0216 12:52:53.246540 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246553 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246565 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246575 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246586 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246599 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246611 4740 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246624 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246638 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246649 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246661 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246672 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246684 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246696 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246708 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246720 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246731 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246741 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246754 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246766 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246777 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246788 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.246799 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249882 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249922 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" 
seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249940 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249955 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249970 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249983 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.249999 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250015 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 
12:52:53.250028 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250042 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250054 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250069 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250082 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250095 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250109 4740 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250121 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250133 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250146 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250160 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250174 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250186 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250197 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250211 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250225 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250238 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250251 4740 reconstruct.go:97] "Volume reconstruction finished" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250261 4740 reconciler.go:26] "Reconciler: start to sync state" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.250782 4740 manager.go:324] Recovery completed Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.259523 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 
12:52:53.261453 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.261511 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.261524 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.262757 4740 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.262770 4740 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.262794 4740 state_mem.go:36] "Initialized new in-memory state store" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.278097 4740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.278998 4740 policy_none.go:49] "None policy: Start" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.279840 4740 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.279895 4740 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.279926 4740 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.279983 4740 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.280302 4740 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.280331 4740 state_mem.go:35] "Initializing new in-memory state store" Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.283090 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.283261 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.320494 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.331905 4740 manager.go:334] "Starting Device Plugin manager" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.331978 4740 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 12:52:53 crc 
kubenswrapper[4740]: I0216 12:52:53.331994 4740 server.go:79] "Starting device plugin registration server" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.332542 4740 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.332565 4740 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.332783 4740 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.332887 4740 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.332897 4740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.345837 4740 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.380536 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.380746 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382031 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382098 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382317 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382524 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.382560 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.383605 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.383649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.383666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384166 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384191 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384310 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384422 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.384460 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.385346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.385393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.385418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386099 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386336 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386500 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.386555 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387501 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387616 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387658 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.387684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388238 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388255 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388491 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388533 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388548 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388563 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.388571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.389429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.389476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.389488 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.423119 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.433208 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.434594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.434637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.434648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.434681 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.435344 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452632 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452668 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452714 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452739 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452805 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452881 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452916 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452947 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.452976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453010 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453039 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453068 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453124 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.453155 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555296 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555378 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555448 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555510 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555540 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555599 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555609 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555692 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555717 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555846 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555861 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555582 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555893 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555793 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555655 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.555987 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556018 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556048 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556045 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556087 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556148 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556102 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556159 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556172 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.556083 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.636069 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.637602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.637649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.637660 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.637690 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.638178 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.705992 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.719571 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.735963 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.751468 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: I0216 12:52:53.756982 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.820290 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1f86a1464d7330a37b16fe71f14f44e4fbabcabf113ee6c93d11926885851d84 WatchSource:0}: Error finding container 1f86a1464d7330a37b16fe71f14f44e4fbabcabf113ee6c93d11926885851d84: Status 404 returned error can't find the container with id 1f86a1464d7330a37b16fe71f14f44e4fbabcabf113ee6c93d11926885851d84
Feb 16 12:52:53 crc kubenswrapper[4740]: E0216 12:52:53.824538 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms"
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.825070 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e81511cf6e188314d2a11b7afacc2cd18f98ada7b3270e5b417b9fa18645f95e WatchSource:0}: Error finding container e81511cf6e188314d2a11b7afacc2cd18f98ada7b3270e5b417b9fa18645f95e: Status 404 returned error can't find the container with id e81511cf6e188314d2a11b7afacc2cd18f98ada7b3270e5b417b9fa18645f95e
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.830448 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9f7985b739c49d420b021824a704b43375293200b8bd8eb6543db15bdc3e5caa WatchSource:0}: Error finding container 9f7985b739c49d420b021824a704b43375293200b8bd8eb6543db15bdc3e5caa: Status 404 returned error can't find the container with id 9f7985b739c49d420b021824a704b43375293200b8bd8eb6543db15bdc3e5caa
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.833001 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5023a39607ec953c0fc9398294d01e07306983a122d88a6d2dd40d1d7ba4b79c WatchSource:0}: Error finding container 5023a39607ec953c0fc9398294d01e07306983a122d88a6d2dd40d1d7ba4b79c: Status 404 returned error can't find the container with id 5023a39607ec953c0fc9398294d01e07306983a122d88a6d2dd40d1d7ba4b79c
Feb 16 12:52:53 crc kubenswrapper[4740]: W0216 12:52:53.837971 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f8209d6b12ca32ff2f68734d92946675fe376c04fef6b442553ae4f13cb2867a WatchSource:0}: Error finding container f8209d6b12ca32ff2f68734d92946675fe376c04fef6b442553ae4f13cb2867a: Status 404 returned error can't find the container with id f8209d6b12ca32ff2f68734d92946675fe376c04fef6b442553ae4f13cb2867a
Feb 16 12:52:54 crc kubenswrapper[4740]: W0216 12:52:54.034669 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.034785 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.038386 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.040330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.040392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.040410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.040447 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.040792 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc"
Feb 16 12:52:54 crc kubenswrapper[4740]: W0216 12:52:54.127993 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.128123 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.218997 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:58:18.48138327 +0000 UTC
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.219577 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.285379 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9f7985b739c49d420b021824a704b43375293200b8bd8eb6543db15bdc3e5caa"}
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.286897 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e81511cf6e188314d2a11b7afacc2cd18f98ada7b3270e5b417b9fa18645f95e"}
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.289464 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f86a1464d7330a37b16fe71f14f44e4fbabcabf113ee6c93d11926885851d84"}
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.291090 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f8209d6b12ca32ff2f68734d92946675fe376c04fef6b442553ae4f13cb2867a"}
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.293566 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5023a39607ec953c0fc9398294d01e07306983a122d88a6d2dd40d1d7ba4b79c"}
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.625720 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s"
Feb 16 12:52:54 crc kubenswrapper[4740]: W0216 12:52:54.660350 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.660441 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:54 crc kubenswrapper[4740]: W0216 12:52:54.702528 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.702594 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.841053 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.842666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.842952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.842961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:54 crc kubenswrapper[4740]: I0216 12:52:54.842982 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 12:52:54 crc kubenswrapper[4740]: E0216 12:52:54.843354 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.159226 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 12:52:55 crc kubenswrapper[4740]: E0216 12:52:55.161959 4740 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.219300 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.219377 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:58:29.436750885 +0000 UTC
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.299004 4740 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3" exitCode=0
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.299111 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.299103 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3"}
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.300263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.300296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.300307 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.302415 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91"}
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.302452 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc"}
Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.302483 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6"} Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.303999 4740 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="22d375bd81abcdcb3e98158153980ce48ba7e8af92764222e3b5effef93ae716" exitCode=0 Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.304070 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"22d375bd81abcdcb3e98158153980ce48ba7e8af92764222e3b5effef93ae716"} Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.304187 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.305317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.305350 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.305361 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.306629 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5" exitCode=0 Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.306671 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5"} Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.306707 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.308281 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.308321 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.308337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.310106 4740 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9" exitCode=0 Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.310154 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9"} Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.310230 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.311071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.311112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.311123 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.316214 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.318659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.318692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:55 crc kubenswrapper[4740]: I0216 12:52:55.318703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:55 crc kubenswrapper[4740]: E0216 12:52:55.607138 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894bb31254a9cd5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:52:53.216885973 +0000 UTC m=+0.593234774,LastTimestamp:2026-02-16 12:52:53.216885973 +0000 UTC m=+0.593234774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 12:52:55 crc kubenswrapper[4740]: W0216 12:52:55.798453 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:55 crc 
kubenswrapper[4740]: E0216 12:52:55.798557 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:56 crc kubenswrapper[4740]: W0216 12:52:56.106234 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:56 crc kubenswrapper[4740]: E0216 12:52:56.106365 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.219530 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:58:01.351815677 +0000 UTC Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.219597 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:56 crc kubenswrapper[4740]: E0216 12:52:56.227240 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Feb 16 
12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.315714 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.315763 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.315775 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.315914 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.317311 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.317345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.317359 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.320487 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd"} Feb 16 
12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.320597 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.321603 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.321625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.321633 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.324096 4740 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ed8cdbaffa6d92f7bb70361af17a1a1ac36209166e6da8115c0ed1a5261f3712" exitCode=0 Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.324314 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.324310 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ed8cdbaffa6d92f7bb70361af17a1a1ac36209166e6da8115c0ed1a5261f3712"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.325833 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.325855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.325866 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.330727 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.330780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.330825 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.330843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.333036 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c3c4fde60d7024db19bf7463e891e23ab4ad03222025aa3c38d27649128c421e"} Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.333137 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.334056 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.334122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.334135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:56 crc kubenswrapper[4740]: W0216 12:52:56.363418 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:56 crc kubenswrapper[4740]: E0216 12:52:56.363521 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.444127 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.445679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.445717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.445727 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:56 crc kubenswrapper[4740]: I0216 12:52:56.445752 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:52:56 crc kubenswrapper[4740]: E0216 12:52:56.446305 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: 
connect: connection refused" node="crc" Feb 16 12:52:56 crc kubenswrapper[4740]: W0216 12:52:56.731807 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:56 crc kubenswrapper[4740]: E0216 12:52:56.731945 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.219982 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:18:14.431585163 +0000 UTC Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.220728 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.339020 4740 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e6ca6d609b2a2ecae1283536c5a981a4817d9e704b4e9629190065843360a55f" exitCode=0 Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.339113 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e6ca6d609b2a2ecae1283536c5a981a4817d9e704b4e9629190065843360a55f"} Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.339137 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.340167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.340196 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.340209 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.344967 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd"} Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.345073 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.345132 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.345202 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.345510 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.345135 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.346401 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.346452 4740 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.346472 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.347275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.347314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.347284 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.347990 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.348085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.348185 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.348280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.348200 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:57 crc kubenswrapper[4740]: I0216 12:52:57.348295 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.221023 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 
03:43:08.741594689 +0000 UTC Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.352200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b9c0eeeb27377d61443f7754bfac1381f13b4f3a82ba264f61d1f9e1f226ec6d"} Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.352263 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b9c8099bb5eba996bc3d8d2e863bd70633bd9b0254c3fe5821fc4793cf046d45"} Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.352282 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fad9616d37e41997011d3984ba488307ed05ea1256b99562f50f2536d76cec56"} Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.355348 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.359960 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd" exitCode=255 Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.360089 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd"} Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.360160 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.360318 4740 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361273 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.361451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.362181 4740 scope.go:117] "RemoveContainer" containerID="d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.691593 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.691944 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.693279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:58 crc kubenswrapper[4740]: I0216 12:52:58.693313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:58 
crc kubenswrapper[4740]: I0216 12:52:58.693326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.221553 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:44:42.06735439 +0000 UTC Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.329123 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.371766 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.372613 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"84842791f89c497895c2a953a0e71d29b46aa338838efc39995fc2b0ab32ca89"} Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.372730 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5ec7459bbcca61588e290cb35a3f34e0554be0a8ecdb013266b263c0c23ec9cf"} Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.373434 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.373488 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.373508 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.377097 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.380871 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878"} Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.381054 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.381296 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.382659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.382738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.382762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.647203 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.649569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.649648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.649675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
16 12:52:59 crc kubenswrapper[4740]: I0216 12:52:59.649725 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.222405 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:58:26.892153149 +0000 UTC Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.305094 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.305440 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.307088 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.307144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.307155 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.383387 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.383459 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.383516 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.384754 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 
12:53:00.384836 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.384852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.384891 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.384938 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:00 crc kubenswrapper[4740]: I0216 12:53:00.384951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.036243 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.223296 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:44:00.181084665 +0000 UTC Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.269609 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.386308 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.387788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.387905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 
12:53:01.387925 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.692435 4740 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 12:53:01 crc kubenswrapper[4740]: I0216 12:53:01.692529 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 12:53:02 crc kubenswrapper[4740]: I0216 12:53:02.224384 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:38:14.571024021 +0000 UTC Feb 16 12:53:02 crc kubenswrapper[4740]: I0216 12:53:02.389223 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:02 crc kubenswrapper[4740]: I0216 12:53:02.390709 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:02 crc kubenswrapper[4740]: I0216 12:53:02.390771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:02 crc kubenswrapper[4740]: I0216 12:53:02.390788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.020232 4740 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.020513 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.022266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.022302 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.022315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.224482 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:55:00.665471754 +0000 UTC Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.258775 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.259024 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.260201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.260272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:03 crc kubenswrapper[4740]: I0216 12:53:03.260286 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:03 crc kubenswrapper[4740]: E0216 12:53:03.346231 4740 eviction_manager.go:285] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.225258 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:40:39.501111705 +0000 UTC Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.931655 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.931972 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.934000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.934058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.934075 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:04 crc kubenswrapper[4740]: I0216 12:53:04.938004 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.225698 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:13:42.063688877 +0000 UTC Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.398059 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.403511 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.403582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.403600 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:05 crc kubenswrapper[4740]: I0216 12:53:05.405048 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:53:06 crc kubenswrapper[4740]: I0216 12:53:06.226318 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:19:31.642197315 +0000 UTC Feb 16 12:53:06 crc kubenswrapper[4740]: I0216 12:53:06.400597 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:06 crc kubenswrapper[4740]: I0216 12:53:06.401976 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:06 crc kubenswrapper[4740]: I0216 12:53:06.402015 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:06 crc kubenswrapper[4740]: I0216 12:53:06.402027 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:07 crc kubenswrapper[4740]: I0216 12:53:07.226888 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:58:45.150159586 +0000 UTC Feb 16 12:53:08 crc kubenswrapper[4740]: I0216 12:53:08.115587 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with 
statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 12:53:08 crc kubenswrapper[4740]: I0216 12:53:08.115969 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 12:53:08 crc kubenswrapper[4740]: I0216 12:53:08.123193 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 12:53:08 crc kubenswrapper[4740]: I0216 12:53:08.123265 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 12:53:08 crc kubenswrapper[4740]: I0216 12:53:08.227183 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:08:54.702629141 +0000 UTC Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.227280 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:12:32.739362457 +0000 UTC Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.227393 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 12:53:09 crc 
kubenswrapper[4740]: I0216 12:53:09.227646 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.228966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.229024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.229039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.252117 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.410040 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.411513 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.411769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.412024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:09 crc kubenswrapper[4740]: I0216 12:53:09.431296 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 12:53:10 crc kubenswrapper[4740]: I0216 12:53:10.227539 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:16:13.757906387 +0000 UTC Feb 16 12:53:10 crc kubenswrapper[4740]: I0216 12:53:10.412974 4740 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:10 crc kubenswrapper[4740]: I0216 12:53:10.414422 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:10 crc kubenswrapper[4740]: I0216 12:53:10.414483 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:10 crc kubenswrapper[4740]: I0216 12:53:10.414498 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.042326 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.042500 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.042944 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.042996 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.043724 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.043786 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.043804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.048027 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.228300 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:42:25.795001859 +0000 UTC Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.415404 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.416392 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.416479 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.417081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.417340 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.417499 
4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.692775 4740 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 12:53:11 crc kubenswrapper[4740]: I0216 12:53:11.692884 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 12:53:12 crc kubenswrapper[4740]: I0216 12:53:12.228796 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:45:34.910159235 +0000 UTC Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.110168 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.110365 4740 trace.go:236] Trace[1864731534]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:53:00.072) (total time: 13038ms): Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1864731534]: ---"Objects listed" error: 13038ms (12:53:13.110) Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1864731534]: [13.038249127s] [13.038249127s] END Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 
12:53:13.110760 4740 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.112080 4740 trace.go:236] Trace[1901863196]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:53:01.577) (total time: 11534ms): Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1901863196]: ---"Objects listed" error: 11534ms (12:53:13.111) Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1901863196]: [11.534229283s] [11.534229283s] END Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.115162 4740 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.115796 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.119355 4740 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.119511 4740 trace.go:236] Trace[157355179]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:53:01.565) (total time: 11553ms): Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[157355179]: ---"Objects listed" error: 11553ms (12:53:13.119) Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[157355179]: [11.553705717s] [11.553705717s] END Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.119546 4740 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.120524 4740 trace.go:236] Trace[1549146929]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 12:53:01.864) (total time: 11256ms): Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1549146929]: ---"Objects 
listed" error: 11256ms (12:53:13.120) Feb 16 12:53:13 crc kubenswrapper[4740]: Trace[1549146929]: [11.256262812s] [11.256262812s] END Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.120843 4740 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.125618 4740 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.146749 4740 csr.go:261] certificate signing request csr-85kdb is approved, waiting to be issued Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.155647 4740 csr.go:257] certificate signing request csr-85kdb is issued Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.215615 4740 apiserver.go:52] "Watching apiserver" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.219173 4740 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.219570 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.220126 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.220568 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.220737 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.220757 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.220806 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.221245 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.221589 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.221597 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.221645 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227113 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227188 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227755 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227861 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227878 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227385 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.228003 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227461 4740 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.227517 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.229200 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:54:21.547026452 +0000 UTC Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.255783 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.271106 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.286876 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.290914 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-ttqrb"] Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.292157 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.295584 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.295777 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.297949 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.305047 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.314730 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320404 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320426 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320483 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320525 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320564 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320585 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320608 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320642 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.320665 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.321837 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.321883 4740 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.322067 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.322430 4740 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.322567 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.326135 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.328538 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.331629 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.339301 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: 
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.347216 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.347253 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.347272 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.347368 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:13.84733182 +0000 UTC m=+21.223680541 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.350422 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.355785 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.355844 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.355867 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.355949 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-16 12:53:13.855924487 +0000 UTC m=+21.232273218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.360093 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.364464 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.385491 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.416710 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421136 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421194 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421220 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421294 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421321 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421349 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421379 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421402 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421421 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421440 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421461 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421493 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421566 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421588 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421610 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421614 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421648 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421637 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421710 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421735 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421754 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421764 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421794 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421826 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421788 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421851 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421930 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.421940 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422015 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422038 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422040 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422064 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422088 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422120 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422148 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422166 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422185 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422206 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422222 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422245 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422275 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422297 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422548 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422608 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422642 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422677 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422702 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422735 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422756 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422778 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422828 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422860 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422887 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422904 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422925 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422942 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422966 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423516 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423540 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423562 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423578 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423596 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423612 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423629 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423648 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423676 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423709 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423732 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423750 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423766 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423786 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423822 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423842 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423859 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423875 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423962 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423981 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423997 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424042 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424060 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424079 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424099 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424118 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424135 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424151 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424168 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424187 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424205 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424224 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424240 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424257 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424277 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424294 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424310 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424332 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424350 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424366 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424383 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424399 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424415 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424440 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424456 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424478 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424495 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424518 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424538 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424556 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424576 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: 
I0216 12:53:13.424593 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424611 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424627 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424644 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424664 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424689 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424705 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424722 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424751 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424769 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424788 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424805 
4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424854 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424872 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424890 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424912 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424931 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424946 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424964 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425011 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425029 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425047 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425065 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425131 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425163 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425191 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425208 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425224 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425240 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425256 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425369 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425389 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425429 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425446 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425463 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425479 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425526 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425565 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425680 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 12:53:13 crc kubenswrapper[4740]: 
I0216 12:53:13.425719 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425758 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425783 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425804 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425840 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425871 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425888 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426028 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426059 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426078 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426095 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426118 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426164 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426189 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426209 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426231 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426247 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426269 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426284 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426319 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426352 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426384 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426434 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426461 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426486 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426505 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426521 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426541 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426558 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426574 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426590 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426624 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 
12:53:13.426663 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426698 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426729 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426752 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426768 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426784 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426869 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426911 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.426945 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427000 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427063 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427089 
4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427106 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427123 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427140 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427166 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427184 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" 
(UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427212 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427239 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427284 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427328 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427405 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427508 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427552 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnxf\" (UniqueName: \"kubernetes.io/projected/42324c80-0f4d-4a2b-8374-fa2358bc8217-kube-api-access-mxnxf\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427570 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42324c80-0f4d-4a2b-8374-fa2358bc8217-hosts-file\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427593 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427643 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427910 4740 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427942 4740 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427959 4740 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427972 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427987 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427999 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422087 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" 
(OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422194 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422358 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422408 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.422431 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423357 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423405 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423425 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423585 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.423935 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424003 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429113 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.424656 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425306 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.425729 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427713 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427933 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.427978 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.428118 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.428248 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.428498 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.428684 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.428874 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.428903 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429135 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429399 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429590 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429615 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.429757 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430086 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430206 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430282 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430350 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430528 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430603 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.430603 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.431216 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.434576 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.434596 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.435023 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.435098 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.435168 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.435346 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.435611 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.436094 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.436465 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.436720 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.437021 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.437520 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.437672 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.437885 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.438084 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.438139 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.439799 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.439928 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.440626 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.440665 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.440846 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.441546 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.441556 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.442139 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.442243 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.448349 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.448549 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.448918 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.449628 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.450149 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.450489 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.450526 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.450640 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.450867 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.452212 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.452526 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.452544 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.452730 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.452798 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.453441 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.453695 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.453909 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.454119 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.454540 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.454757 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.454789 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.454929 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455126 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455212 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455321 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455453 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455714 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.458652 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455788 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.455827 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.456526 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.456897 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.457210 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.457640 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.457964 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.458174 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.458543 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.458802 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.458870 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459064 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459040 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459333 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459339 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459389 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.459475 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459493 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459559 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.459570 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:13.959554542 +0000 UTC m=+21.335903263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459677 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.459869 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:13.959800974 +0000 UTC m=+21.336149695 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459905 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.460013 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.460019 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.460051 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.460166 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.459479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462079 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462360 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462459 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462633 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462649 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462685 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462726 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462748 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462805 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.462919 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.463662 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.464519 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.464606 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465186 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465214 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465335 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465646 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465908 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.465794 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.466617 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.466953 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.466575 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.467517 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.467596 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.467959 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468721 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.468013 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:13.967973167 +0000 UTC m=+21.344321888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468085 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468203 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468482 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468256 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468728 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.468264 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.469204 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.469540 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.480090 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.480219 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.480953 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.481757 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.482703 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.482990 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.483314 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.484016 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.484702 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.485064 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486004 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486316 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486477 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486504 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486480 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486781 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.486889 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.487225 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.487192 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.487518 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.487765 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.487849 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488004 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488180 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488538 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488602 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488609 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.488747 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.489186 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.494761 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.495068 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.498457 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.498507 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.499058 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.500166 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.500192 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.500472 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.507491 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.508795 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.518972 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.519381 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.528356 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529171 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42324c80-0f4d-4a2b-8374-fa2358bc8217-hosts-file\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnxf\" (UniqueName: \"kubernetes.io/projected/42324c80-0f4d-4a2b-8374-fa2358bc8217-kube-api-access-mxnxf\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529358 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/42324c80-0f4d-4a2b-8374-fa2358bc8217-hosts-file\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529396 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529546 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529572 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529588 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529616 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529630 4740 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529644 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529658 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529673 4740 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529686 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529701 4740 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529714 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529728 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529758 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529770 4740 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529780 4740 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529791 4740 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529804 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529861 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529875 4740 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529888 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529901 4740 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529914 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529942 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529956 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529969 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529982 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.529994 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530008 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530025 4740 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530040 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530055 4740 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530069 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530081 4740 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530096 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530109 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on 
node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530122 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530135 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530147 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530159 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530174 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530187 4740 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530200 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530212 4740 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530225 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530238 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530251 4740 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530266 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530282 4740 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530294 4740 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530307 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530320 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530335 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530349 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530370 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530382 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530398 4740 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530415 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530428 4740 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530441 4740 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530454 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530467 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530479 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530491 4740 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530506 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530517 4740 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530529 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530541 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530554 4740 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530566 4740 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530579 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530592 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" 
DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530607 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530621 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530653 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530668 4740 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530682 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530695 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530709 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530721 4740 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530734 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530747 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530760 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530774 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530787 4740 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530800 4740 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530841 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530857 4740 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530871 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530885 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530898 4740 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530911 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530924 4740 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530938 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") 
on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530950 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530963 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530976 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.530990 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531004 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531021 4740 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531035 4740 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc 
kubenswrapper[4740]: I0216 12:53:13.531049 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531064 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531078 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531091 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531104 4740 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531117 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531131 4740 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531144 4740 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531159 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531173 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531187 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531200 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531212 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531222 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531232 4740 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531242 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531255 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531267 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531284 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531298 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531310 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531321 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node 
\"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531331 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531342 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531354 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531363 4740 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531373 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531382 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531391 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531402 4740 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531411 4740 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531420 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531430 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531440 4740 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531450 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531461 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531472 4740 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531483 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531493 4740 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531539 4740 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531552 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531563 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531572 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531582 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531591 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531601 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531611 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531624 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531642 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531655 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531666 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath 
\"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531678 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531692 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531706 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531717 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.531728 4740 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532296 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532312 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532350 4740 reconciler_common.go:293] "Volume detached for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532363 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532373 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532382 4740 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532392 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532407 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532431 4740 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532441 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532450 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532460 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532469 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532479 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532504 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532513 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532523 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532532 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532543 4740 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532552 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532564 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532596 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532609 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532623 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532638 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.532669 4740 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.538325 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.540708 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.548699 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.549918 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnxf\" (UniqueName: \"kubernetes.io/projected/42324c80-0f4d-4a2b-8374-fa2358bc8217-kube-api-access-mxnxf\") pod \"node-resolver-ttqrb\" (UID: \"42324c80-0f4d-4a2b-8374-fa2358bc8217\") " pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.549882 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.555634 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.556224 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.563439 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.576738 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: W0216 12:53:13.582000 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ee23b2b7f4584c1af5f89795040d639dc9de01716edceb2e5a4d39f906f6163d WatchSource:0}: Error finding container ee23b2b7f4584c1af5f89795040d639dc9de01716edceb2e5a4d39f906f6163d: Status 404 returned error can't find the container with id ee23b2b7f4584c1af5f89795040d639dc9de01716edceb2e5a4d39f906f6163d Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.591723 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.603625 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.609319 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-ttqrb" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.614430 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.625702 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.633349 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.647308 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-q4qtj"] Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.647690 4740 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.650099 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.650275 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.650300 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.653965 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.654127 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.665994 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.677547 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.693784 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.704560 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.705385 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.706025 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.713138 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.717136 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.725849 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734583 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a46e0708-a1b9-4055-8abc-b3d8de6e5245-proxy-tls\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 
12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734657 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plntj\" (UniqueName: \"kubernetes.io/projected/a46e0708-a1b9-4055-8abc-b3d8de6e5245-kube-api-access-plntj\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734717 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a46e0708-a1b9-4055-8abc-b3d8de6e5245-rootfs\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734792 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a46e0708-a1b9-4055-8abc-b3d8de6e5245-mcd-auth-proxy-config\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734843 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734860 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734875 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.734887 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.735051 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.754165 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.768653 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.835714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plntj\" (UniqueName: \"kubernetes.io/projected/a46e0708-a1b9-4055-8abc-b3d8de6e5245-kube-api-access-plntj\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.835749 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a46e0708-a1b9-4055-8abc-b3d8de6e5245-rootfs\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.835793 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a46e0708-a1b9-4055-8abc-b3d8de6e5245-mcd-auth-proxy-config\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.835933 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a46e0708-a1b9-4055-8abc-b3d8de6e5245-proxy-tls\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.836763 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a46e0708-a1b9-4055-8abc-b3d8de6e5245-mcd-auth-proxy-config\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.836833 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a46e0708-a1b9-4055-8abc-b3d8de6e5245-rootfs\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.842482 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a46e0708-a1b9-4055-8abc-b3d8de6e5245-proxy-tls\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.851932 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-plntj\" (UniqueName: \"kubernetes.io/projected/a46e0708-a1b9-4055-8abc-b3d8de6e5245-kube-api-access-plntj\") pod \"machine-config-daemon-q4qtj\" (UID: \"a46e0708-a1b9-4055-8abc-b3d8de6e5245\") " pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.936325 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.936442 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936663 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936686 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936700 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936737 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936762 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:14.93674135 +0000 UTC m=+22.313090071 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936772 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936791 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: E0216 12:53:13.936918 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 12:53:14.936893722 +0000 UTC m=+22.313242633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:13 crc kubenswrapper[4740]: I0216 12:53:13.966661 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.022210 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-v88dn"] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.022583 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.023716 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mcb2z"] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.024101 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-msmgh"] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.028887 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029287 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029328 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029360 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029506 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029510 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.029619 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.034551 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.034601 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.034551 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.034791 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.034885 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.035093 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.035126 4740 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.035138 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.035096 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.037650 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.037985 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:15.037954504 +0000 UTC m=+22.414303245 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038023 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-etc-kubernetes\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038070 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038105 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038132 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038155 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038185 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038213 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-os-release\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038249 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-conf-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038293 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.038303 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.038327 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.039608 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:15.039546389 +0000 UTC m=+22.415895110 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.039688 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.039740 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.039865 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-cni-binary-copy\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.039926 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.039922 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cqh7\" 
(UniqueName: \"kubernetes.io/projected/21f981d4-46dd-4bb5-b244-aaf603008c5e-kube-api-access-8cqh7\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.039983 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cnibin\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.040003 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:15.039988633 +0000 UTC m=+22.416337554 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040056 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-system-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040100 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-kubelet\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040125 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-socket-dir-parent\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040168 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 
12:53:14.040191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040245 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040272 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040305 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040329 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc 
kubenswrapper[4740]: I0216 12:53:14.040358 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040385 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040423 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-multus-certs\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040452 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040488 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin\") pod \"ovnkube-node-msmgh\" (UID: 
\"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040522 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-binary-copy\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040545 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-daemon-config\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040586 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-system-cni-dir\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040618 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rml5w\" (UniqueName: \"kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040658 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040685 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040711 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-netns\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040774 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-bin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040803 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc7pl\" (UniqueName: \"kubernetes.io/projected/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-kube-api-access-mc7pl\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040854 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-os-release\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040882 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040913 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040933 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040950 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-multus\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040973 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-cnibin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.040990 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-k8s-cni-cncf-io\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.041004 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-hostroot\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.044039 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.051525 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.061889 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.071672 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.081378 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.092806 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.104945 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.125540 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.137841 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142112 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cnibin\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142145 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-system-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-kubelet\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142177 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142208 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142223 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-socket-dir-parent\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142239 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns\") pod 
\"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142262 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142298 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142314 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-multus-certs\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142329 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142346 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd\") pod 
\"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142361 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142375 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142391 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-binary-copy\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142408 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-daemon-config\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142425 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-system-cni-dir\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: 
\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142448 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142463 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rml5w\" (UniqueName: \"kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142502 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-netns\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142518 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc7pl\" (UniqueName: \"kubernetes.io/projected/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-kube-api-access-mc7pl\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: 
\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142533 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-os-release\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142549 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-bin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142580 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-multus\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142597 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142612 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-hostroot\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142640 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-cnibin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142655 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-k8s-cni-cncf-io\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142672 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142686 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142703 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142718 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-etc-kubernetes\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142741 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142768 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142799 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142823 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-os-release\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142837 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-conf-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142865 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-cni-binary-copy\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.142880 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cqh7\" (UniqueName: \"kubernetes.io/projected/21f981d4-46dd-4bb5-b244-aaf603008c5e-kube-api-access-8cqh7\") pod 
\"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143126 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cnibin\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143270 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-system-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143293 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-kubelet\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143314 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143335 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc 
kubenswrapper[4740]: I0216 12:53:14.143754 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-bin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143913 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.143949 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144008 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-multus-certs\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144026 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144105 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144107 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144049 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144145 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144167 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-socket-dir-parent\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144383 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-var-lib-cni-multus\") pod \"multus-v88dn\" (UID: 
\"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144400 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144436 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-k8s-cni-cncf-io\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144495 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-conf-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144532 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144560 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 
16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144589 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144646 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-os-release\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144660 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-cni-dir\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144655 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144686 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144705 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-host-run-netns\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-cnibin\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144724 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-etc-kubernetes\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144748 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144750 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144766 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-system-cni-dir\") pod 
\"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144771 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-os-release\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144783 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144795 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/21f981d4-46dd-4bb5-b244-aaf603008c5e-hostroot\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.144829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.145046 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-cni-binary-copy\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " 
pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.145280 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-cni-binary-copy\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.145448 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.146627 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/21f981d4-46dd-4bb5-b244-aaf603008c5e-multus-daemon-config\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.147885 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.148422 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 
12:53:14.157134 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 12:48:13 +0000 UTC, rotation deadline is 2026-11-13 14:33:07.786932992 +0000 UTC Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.157235 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6481h39m53.629701523s for next certificate rotation Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.163496 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cqh7\" (UniqueName: \"kubernetes.io/projected/21f981d4-46dd-4bb5-b244-aaf603008c5e-kube-api-access-8cqh7\") pod \"multus-v88dn\" (UID: \"21f981d4-46dd-4bb5-b244-aaf603008c5e\") " pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.164201 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc7pl\" (UniqueName: \"kubernetes.io/projected/ad2ec4df-11e9-4970-bd6b-c258ce2d08bb-kube-api-access-mc7pl\") pod \"multus-additional-cni-plugins-mcb2z\" (UID: \"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\") " pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.164791 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.167069 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rml5w\" (UniqueName: \"kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w\") pod \"ovnkube-node-msmgh\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.175980 4740 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.190099 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.207886 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.222261 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.230886 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:25:35.297890451 +0000 UTC Feb 16 12:53:14 crc 
kubenswrapper[4740]: I0216 12:53:14.255805 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.272669 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.328188 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.349030 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.354356 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-v88dn" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.359499 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.372859 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.379866 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:14 crc kubenswrapper[4740]: W0216 12:53:14.387839 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad2ec4df_11e9_4970_bd6b_c258ce2d08bb.slice/crio-27624147e5a1d1bd619b329f53692419eecc622090284c7a82a65faea5eaab0e WatchSource:0}: Error finding container 27624147e5a1d1bd619b329f53692419eecc622090284c7a82a65faea5eaab0e: Status 404 returned error can't find the container with id 27624147e5a1d1bd619b329f53692419eecc622090284c7a82a65faea5eaab0e Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.429399 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerStarted","Data":"d779e04aa061f91765ed436953c618c472ea0d9a00a456e348d56b2e0782ee5c"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.433767 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.433832 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.433845 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a7d5dfb5688b2b93feafa71e8584c4346d7001e888896dca26263e6bc549ad14"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.435652 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.435720 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.435732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"a3287e85de51a52db07ef29dacd4547a2a71069bb40cbf280a5504aae50e5ab1"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.436714 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ee23b2b7f4584c1af5f89795040d639dc9de01716edceb2e5a4d39f906f6163d"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.437841 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"eb66f3d2b37f21fe7aa111a136026ce7eb2cec2307821fe7b198f1e6beb272ce"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.439583 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerStarted","Data":"27624147e5a1d1bd619b329f53692419eecc622090284c7a82a65faea5eaab0e"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.441415 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ttqrb" event={"ID":"42324c80-0f4d-4a2b-8374-fa2358bc8217","Type":"ContainerStarted","Data":"880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.441440 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ttqrb" event={"ID":"42324c80-0f4d-4a2b-8374-fa2358bc8217","Type":"ContainerStarted","Data":"60057fc64fdaa9b8f385ea9918f07d39f190e1f2d3fbd52ac19d30448d547e62"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.442607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.442634 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e6007e017e68653eda1596ccc2546c8eeadfba98c27d4b9bb8182ab6b2d544ca"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.444231 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.444544 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.447227 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.447328 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" exitCode=255 Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.447364 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878"} Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.447394 4740 scope.go:117] "RemoveContainer" containerID="d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.458695 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.481143 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.494177 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.505600 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.519086 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.531332 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.543090 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.546349 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.546845 4740 scope.go:117] "RemoveContainer" 
containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.547084 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.556029 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.567237 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.597457 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.623880 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T
12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.662619 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.699993 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.740782 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.780904 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.825093 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:52:57Z\\\",\\\"message\\\":\\\"W0216 12:52:56.643264 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 12:52:56.644080 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771246376 cert, and key in /tmp/serving-cert-2462593891/serving-signer.crt, /tmp/serving-cert-2462593891/serving-signer.key\\\\nI0216 12:52:57.084088 1 observer_polling.go:159] Starting file observer\\\\nW0216 12:52:57.086632 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 12:52:57.086785 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:52:57.088200 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2462593891/tls.crt::/tmp/serving-cert-2462593891/tls.key\\\\\\\"\\\\nF0216 12:52:57.304564 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.860787 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.902310 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.942698 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.950789 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951136 4740 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951210 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951174 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951234 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951269 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951287 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951312 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:16.951287466 +0000 UTC m=+24.327636377 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.951209 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:14 crc kubenswrapper[4740]: E0216 12:53:14.951453 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:16.951429418 +0000 UTC m=+24.327778209 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:14 crc kubenswrapper[4740]: I0216 12:53:14.980411 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-16T12:53:14Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.029375 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.052147 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.052284 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.052316 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.052417 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.052415 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.052470 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:17.05245682 +0000 UTC m=+24.428805541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.052515 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:17.05249084 +0000 UTC m=+24.428839721 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.052627 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:17.052614161 +0000 UTC m=+24.428963082 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.064314 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.231434 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:46:14.996936557 +0000 UTC Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.280953 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.281118 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.280972 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.281327 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.281473 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.281539 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.286119 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.286914 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.288068 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.288721 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.289689 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.290218 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.290888 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.291873 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.292641 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.293738 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.294350 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.295947 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.296490 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.297014 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.297907 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.298402 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.299369 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.299780 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.300322 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.301287 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.301742 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.302714 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.303181 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.304219 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.304666 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.305497 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.306695 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.307170 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.308402 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.309057 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.309961 4740 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.310061 4740 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.311646 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.312648 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.313466 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.315206 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.316042 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.316992 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.317695 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.320016 4740 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.320597 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.321618 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.322399 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.323446 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.323935 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.324798 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.325394 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.326796 4740 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.327456 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.328471 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.328951 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.329989 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.330588 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.331055 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.452552 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb" exitCode=0 Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 
12:53:15.452602 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb"} Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.454667 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187" exitCode=0 Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.454744 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187"} Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.456780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerStarted","Data":"a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd"} Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.459260 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.463262 4740 scope.go:117] "RemoveContainer" containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" Feb 16 12:53:15 crc kubenswrapper[4740]: E0216 12:53:15.463466 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.471213 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.490084 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.520902 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b1f529da7a9c940cc1253241126d9c2faf83db16fbe3d86e4f4aa3beb008dd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:52:57Z\\\",\\\"message\\\":\\\"W0216 12:52:56.643264 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 12:52:56.644080 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771246376 cert, and key in /tmp/serving-cert-2462593891/serving-signer.crt, /tmp/serving-cert-2462593891/serving-signer.key\\\\nI0216 12:52:57.084088 1 observer_polling.go:159] Starting file observer\\\\nW0216 12:52:57.086632 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 12:52:57.086785 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:52:57.088200 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2462593891/tls.crt::/tmp/serving-cert-2462593891/tls.key\\\\\\\"\\\\nF0216 12:52:57.304564 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.567186 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.598027 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.622693 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.644838 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.672276 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.692720 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.708363 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.725909 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.742316 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.764054 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.787388 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.800920 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.821306 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.848850 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.866132 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.889885 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.905329 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.909783 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7zs65"] Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.910205 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.912747 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.924018 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:15Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.930486 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.952074 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.961976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-host\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.962008 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-serviceca\") pod \"node-ca-7zs65\" (UID: 
\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.962058 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzq7l\" (UniqueName: \"kubernetes.io/projected/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-kube-api-access-tzq7l\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:15 crc kubenswrapper[4740]: I0216 12:53:15.976270 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.002738 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.023100 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.062885 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzq7l\" (UniqueName: \"kubernetes.io/projected/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-kube-api-access-tzq7l\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 
crc kubenswrapper[4740]: I0216 12:53:16.062935 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-host\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.062955 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-serviceca\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.063086 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-host\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.063888 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-serviceca\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.064903 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.094580 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzq7l\" (UniqueName: 
\"kubernetes.io/projected/6020d2c6-e8f9-4ca7-b6c4-c219193a42e6-kube-api-access-tzq7l\") pod \"node-ca-7zs65\" (UID: \"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\") " pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.122336 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.158649 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.201885 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.232319 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:16:22.194416461 +0000 UTC Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.240158 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.248188 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7zs65" Feb 16 12:53:16 crc kubenswrapper[4740]: W0216 12:53:16.278689 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6020d2c6_e8f9_4ca7_b6c4_c219193a42e6.slice/crio-b8353901a81c0ac7ed51bd0d481a5c78af0b95112f4a0d5aa927c1f17bc1f75f WatchSource:0}: Error finding container b8353901a81c0ac7ed51bd0d481a5c78af0b95112f4a0d5aa927c1f17bc1f75f: Status 404 returned error can't find the container with id b8353901a81c0ac7ed51bd0d481a5c78af0b95112f4a0d5aa927c1f17bc1f75f Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.282889 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.320394 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.361247 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.398866 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.442936 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.468637 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b" exitCode=0 Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.468710 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476416 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476469 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476484 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476507 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.476518 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.477695 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7zs65" 
event={"ID":"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6","Type":"ContainerStarted","Data":"b8353901a81c0ac7ed51bd0d481a5c78af0b95112f4a0d5aa927c1f17bc1f75f"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.479933 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b"} Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.480869 4740 scope.go:117] "RemoveContainer" containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.481156 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.482211 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.522645 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.560098 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.603195 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.641458 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.680306 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.721269 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.759999 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.800638 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.842085 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.885545 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.924351 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.968923 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.972145 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:16 crc kubenswrapper[4740]: I0216 12:53:16.972237 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972334 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972369 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972380 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972450 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972471 4740 projected.go:194] Error 
preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972472 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972542 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:20.972520273 +0000 UTC m=+28.348869024 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:16 crc kubenswrapper[4740]: E0216 12:53:16.972570 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:20.972558025 +0000 UTC m=+28.348906786 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.011009 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.057892 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.073374 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.073598 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.073716 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.073735 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:21.073685714 +0000 UTC m=+28.450034475 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.073805 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:21.073780656 +0000 UTC m=+28.450129407 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.073886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.074101 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.074171 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:21.074154808 +0000 UTC m=+28.450503569 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.090575 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.129716 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.166440 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.232515 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:21:16.131042337 +0000 UTC Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.280968 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.281012 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.280970 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.281119 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.281230 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:17 crc kubenswrapper[4740]: E0216 12:53:17.281391 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.488906 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7" exitCode=0 Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.489039 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7"} Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.492155 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7zs65" event={"ID":"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6","Type":"ContainerStarted","Data":"13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e"} Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.506959 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.534232 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.549545 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.570481 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.582355 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.602131 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 
12:53:17.613220 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.634138 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.646935 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.658170 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.670264 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.690713 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.703018 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.720280 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.760326 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.797257 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.844351 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.895156 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 
12:53:17.931098 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:17 crc kubenswrapper[4740]: I0216 12:53:17.960726 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.001607 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.039867 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.078905 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.125383 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.160440 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.201101 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.233360 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:42:38.557611797 +0000 UTC Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.498428 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a"} Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.500662 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16" exitCode=0 Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.500729 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16"} Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.520895 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.533875 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.553424 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.566928 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.581381 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.592136 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.604402 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.618101 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.628625 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.643178 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a
ef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.654672 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.678985 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.695406 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.700432 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.722840 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.738083 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.783406 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.820397 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.863979 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.906991 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.939708 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:18 crc kubenswrapper[4740]: I0216 12:53:18.980592 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.020735 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.063331 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.103155 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.144526 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.187766 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.232799 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.233868 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:06:14.463356873 +0000 UTC Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.264905 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.280385 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.280467 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.280503 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.280666 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.280796 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.281016 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.509274 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96" exitCode=0 Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.509345 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.516043 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.520356 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.520451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.521152 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.521639 4740 kubelet_node_status.go:76] "Attempting to register node" 
node="crc" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.534012 4740 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.534322 4740 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.538597 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.538651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.538670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.538695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.538714 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.539340 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.559279 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.563704 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.566466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.566518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.566531 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.566552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.566564 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.582947 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.587831 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.588322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.588374 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.588387 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.588409 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.588424 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.607732 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.608551 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.612881 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.612924 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.612935 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.612953 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.612967 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.628164 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.631285 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.635225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.635268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.635283 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.635305 
4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.635356 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.645008 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.652384 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: E0216 12:53:19.652665 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.656563 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.656623 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.656643 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.656667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.656682 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.665523 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b
506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.679120 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f00307
7f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.700001 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.716473 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.743409 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.759684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.759733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.759743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.759760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.759771 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.781558 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.819865 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.858431 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.862233 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.862313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.862326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.862349 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.862373 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.965234 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.965277 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.965286 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.965302 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:19 crc kubenswrapper[4740]: I0216 12:53:19.965310 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:19Z","lastTransitionTime":"2026-02-16T12:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.068681 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.068719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.068737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.068755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.068767 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.172109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.172180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.172203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.172235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.172259 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.235146 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:38:21.962462766 +0000 UTC Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.275402 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.275456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.275477 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.275498 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.275512 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.379228 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.379658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.379687 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.379721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.379746 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.482577 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.482733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.482900 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.483053 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.483177 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.517753 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.517969 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.518146 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.518299 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.521124 4740 generic.go:334] "Generic (PLEG): container finished" podID="ad2ec4df-11e9-4970-bd6b-c258ce2d08bb" containerID="3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4" exitCode=0 Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.521189 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerDied","Data":"3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.540854 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.549965 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.551302 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.556285 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.574570 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.586236 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.586263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 
12:53:20.586275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.586291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.586303 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.589932 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.602535 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.614945 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.627555 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.641987 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.654756 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.671072 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.686899 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.689141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.689166 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.689174 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.689187 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.689197 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.702535 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07c
cc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.715024 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.740626 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.755511 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.772518 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.783090 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.792176 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.792223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.792234 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.792254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.792269 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.799318 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.812660 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.825825 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.838899 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.851890 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.868342 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.884212 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.895097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.895125 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.895133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.895149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.895159 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:20Z","lastTransitionTime":"2026-02-16T12:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.902023 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.918204 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.939393 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:20 crc kubenswrapper[4740]: I0216 12:53:20.987394 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.001965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.002007 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.002020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.002038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.002050 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.017786 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.017883 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018025 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018028 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018077 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018099 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:21 crc 
kubenswrapper[4740]: E0216 12:53:21.018045 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018165 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018172 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.018145607 +0000 UTC m=+36.394494358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.018219 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.018203648 +0000 UTC m=+36.394552539 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.105182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.105229 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.105240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.105256 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.105269 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.118658 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.118945 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.119457 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.119542 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.119517193 +0000 UTC m=+36.495865954 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.119649 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.119610926 +0000 UTC m=+36.495959677 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.119711 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.119926 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.119989 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.119975327 +0000 UTC m=+36.496324078 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.208684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.208734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.208745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.208767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.208779 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.237133 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 13:45:24.065587729 +0000 UTC Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.281168 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.281232 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.281168 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.281343 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.281417 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:21 crc kubenswrapper[4740]: E0216 12:53:21.281557 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.312512 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.312581 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.312599 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.312625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.312648 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.415103 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.415145 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.415156 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.415173 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.415183 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.518062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.518129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.518150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.518181 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.518202 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.529094 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" event={"ID":"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb","Type":"ContainerStarted","Data":"504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.544477 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.564584 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.576974 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.595469 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.614745 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.620721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.620781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.620795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.620858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.620898 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.626468 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.641208 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.659237 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.670975 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.686731 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.710214 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724054 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724599 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.724610 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.738087 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.752616 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.827702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.828071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.828160 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.828272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.828359 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.930735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.931051 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.931176 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.931306 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:21 crc kubenswrapper[4740]: I0216 12:53:21.931440 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:21Z","lastTransitionTime":"2026-02-16T12:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.033909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.033992 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.034014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.034044 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.034062 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.137688 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.137766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.137847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.137890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.137916 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.237768 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:25:03.53814184 +0000 UTC Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.245542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.245586 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.245598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.245621 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.245635 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.349038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.349101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.349117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.349143 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.349163 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.452100 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.452179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.452209 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.452248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.452274 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.556058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.556119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.556140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.556170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.556190 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.658650 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.658725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.658745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.658790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.658837 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.761281 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.761332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.761344 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.761363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.761374 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.864703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.864763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.864775 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.864797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.864825 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.967210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.967272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.967289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.967310 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:22 crc kubenswrapper[4740]: I0216 12:53:22.967322 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:22Z","lastTransitionTime":"2026-02-16T12:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.069283 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.069324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.069335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.069349 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.069359 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.073648 4740 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.171427 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.171674 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.171684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.171701 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.171710 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.238120 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 14:11:12.750922948 +0000 UTC Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.273622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.273666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.273677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.273693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.273704 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.280487 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.280533 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.280596 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:23 crc kubenswrapper[4740]: E0216 12:53:23.280724 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:23 crc kubenswrapper[4740]: E0216 12:53:23.281056 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:23 crc kubenswrapper[4740]: E0216 12:53:23.281117 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.295248 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.306685 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.319452 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.331804 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.349650 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.362084 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.376089 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.377252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.377291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.377304 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.377320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: 
I0216 12:53:23.377331 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.391343 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.405493 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.417689 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.446227 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480373 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.480628 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.499439 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.512249 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.537197 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/0.log" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.539781 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" 
containerID="c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8" exitCode=1 Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.539847 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.540523 4740 scope.go:117] "RemoveContainer" containerID="c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.552894 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.565513 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.575844 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.582642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.582737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.582752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.582771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.582784 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.587288 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.600955 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.612752 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.628149 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.638434 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.661911 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.677864 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.685068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.685114 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.685124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.685138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.685147 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.698544 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.716117 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.727619 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.743564 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 
12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f
2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.787593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.787632 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.787643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.787658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.787669 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.889708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.890037 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.890109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.890198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.890276 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.993828 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.993878 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.993893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.993912 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:23 crc kubenswrapper[4740]: I0216 12:53:23.993927 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:23Z","lastTransitionTime":"2026-02-16T12:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.096393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.096432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.096442 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.096457 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.096466 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.199349 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.199408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.199429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.199451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.199465 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.239297 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:56:04.139519246 +0000 UTC Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.301899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.301958 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.301971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.301991 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.302005 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.403367 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.403405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.403419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.403443 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.403455 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.505628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.505672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.505683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.505697 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.505708 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.547806 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/1.log" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.548633 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/0.log" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.551699 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f" exitCode=1 Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.551752 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.551869 4740 scope.go:117] "RemoveContainer" containerID="c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.552759 4740 scope.go:117] "RemoveContainer" containerID="2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f" Feb 16 12:53:24 crc kubenswrapper[4740]: E0216 12:53:24.553005 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.576861 4740 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.600158 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.609347 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.609414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.609432 4740 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.609461 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.609479 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.622183 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.658804 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.673691 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.687308 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.699364 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.712160 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.712247 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.712264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.712285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.712301 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.715404 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.728115 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.740699 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.756908 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.769714 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.783385 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.800922 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.815108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.815162 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.815174 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.815192 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.815205 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.917764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.917868 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.917888 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.917921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:24 crc kubenswrapper[4740]: I0216 12:53:24.917945 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:24Z","lastTransitionTime":"2026-02-16T12:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.020965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.021021 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.021039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.021061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.021078 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.124015 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.124065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.124082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.124107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.124124 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.226447 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.226509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.226532 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.226561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.226585 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.239483 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:14:23.636942584 +0000 UTC Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.281139 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.281320 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:25 crc kubenswrapper[4740]: E0216 12:53:25.281526 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.282013 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:25 crc kubenswrapper[4740]: E0216 12:53:25.282187 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:25 crc kubenswrapper[4740]: E0216 12:53:25.282275 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.329786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.329858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.329872 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.329893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.329908 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.433565 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.433622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.433640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.433666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.433685 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.536904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.537070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.537089 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.537149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.537165 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.556648 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/1.log" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.640759 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.640892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.640921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.641004 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.641025 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.744326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.744594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.744608 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.744627 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.744642 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.848471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.848541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.848563 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.848589 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.848605 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.951494 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.951554 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.951573 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.951597 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:25 crc kubenswrapper[4740]: I0216 12:53:25.951617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:25Z","lastTransitionTime":"2026-02-16T12:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.054334 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.054387 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.054399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.054416 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.054428 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.157547 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.157633 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.157657 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.157685 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.157706 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.240382 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:41:05.091903085 +0000 UTC Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.260592 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.260650 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.260670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.260694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.260711 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.363294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.363376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.363400 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.363428 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.363449 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.466801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.466904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.466927 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.466955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.466974 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.557860 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn"] Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.558640 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.561083 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.561870 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.570405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.570479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.570502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.570562 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.570587 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.586226 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.605673 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.639287 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.661374 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.672986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.673058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.673074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.673103 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.673121 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.684088 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.684188 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzdl5\" (UniqueName: \"kubernetes.io/projected/872ae2f5-5967-4ebe-b05f-148a0f7402f7-kube-api-access-pzdl5\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.684245 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.684332 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.684636 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.711742 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.725897 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.744944 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a
ef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.762318 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.775261 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.775292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.775300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc 
kubenswrapper[4740]: I0216 12:53:26.775314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.775328 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.778736 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.786536 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.786592 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pzdl5\" (UniqueName: \"kubernetes.io/projected/872ae2f5-5967-4ebe-b05f-148a0f7402f7-kube-api-access-pzdl5\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.786627 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.786652 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.787379 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.787381 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" 
Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.792850 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.799459 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/872ae2f5-5967-4ebe-b05f-148a0f7402f7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.807410 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.811253 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzdl5\" (UniqueName: \"kubernetes.io/projected/872ae2f5-5967-4ebe-b05f-148a0f7402f7-kube-api-access-pzdl5\") pod \"ovnkube-control-plane-749d76644c-grlzn\" (UID: \"872ae2f5-5967-4ebe-b05f-148a0f7402f7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.818773 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.836138 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.855396 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.877788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.877882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.877898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.877921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.877935 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.880991 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" Feb 16 12:53:26 crc kubenswrapper[4740]: W0216 12:53:26.901003 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod872ae2f5_5967_4ebe_b05f_148a0f7402f7.slice/crio-60108bed7ed3e06417012c7bd81e1b01b4c6607bd97e3eeae322a6d5c835affd WatchSource:0}: Error finding container 60108bed7ed3e06417012c7bd81e1b01b4c6607bd97e3eeae322a6d5c835affd: Status 404 returned error can't find the container with id 60108bed7ed3e06417012c7bd81e1b01b4c6607bd97e3eeae322a6d5c835affd Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.980982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.981031 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.981089 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.981101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:26 crc kubenswrapper[4740]: I0216 12:53:26.981110 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:26Z","lastTransitionTime":"2026-02-16T12:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.085353 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.085894 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.085913 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.085932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.085947 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.189429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.189477 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.189494 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.189515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.189529 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.241293 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:33:24.991130406 +0000 UTC Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.281114 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.281138 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.281249 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.281308 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.281379 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.281723 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.291576 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.291612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.291622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.291637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.291648 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.393568 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.393593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.393602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.393614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.393623 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.496358 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.496384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.496392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.496404 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.496414 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.571562 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" event={"ID":"872ae2f5-5967-4ebe-b05f-148a0f7402f7","Type":"ContainerStarted","Data":"33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.571940 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" event={"ID":"872ae2f5-5967-4ebe-b05f-148a0f7402f7","Type":"ContainerStarted","Data":"759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.572116 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" event={"ID":"872ae2f5-5967-4ebe-b05f-148a0f7402f7","Type":"ContainerStarted","Data":"60108bed7ed3e06417012c7bd81e1b01b4c6607bd97e3eeae322a6d5c835affd"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.585413 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.599236 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.599267 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.599288 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.599307 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.599319 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.600445 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z 
is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.614852 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.625711 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.642758 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92a
ef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.658577 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.675189 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.690104 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.701535 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.701603 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.701615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.701634 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.701647 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.706300 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-tcfzx"] Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.706740 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.706802 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.707174 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.724265 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.744534 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.758318 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.779004 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.796921 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cphq\" (UniqueName: \"kubernetes.io/projected/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-kube-api-access-5cphq\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc 
kubenswrapper[4740]: I0216 12:53:27.797000 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.798391 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.804743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.804796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.804834 4740 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.804854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.804869 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.809366 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.825227 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.837057 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.849047 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.860412 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.872756 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.887161 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a
1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.898170 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cphq\" (UniqueName: \"kubernetes.io/projected/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-kube-api-access-5cphq\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.898223 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.898256 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.898341 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:27 crc kubenswrapper[4740]: E0216 12:53:27.898438 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:28.39842161 +0000 UTC m=+35.774770341 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.906805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.906859 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.906868 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.906882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.906892 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:27Z","lastTransitionTime":"2026-02-16T12:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.913874 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc 
kubenswrapper[4740]: I0216 12:53:27.921628 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cphq\" (UniqueName: \"kubernetes.io/projected/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-kube-api-access-5cphq\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.929029 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.941527 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.954718 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.969302 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:27 crc kubenswrapper[4740]: I0216 12:53:27.989104 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07c
cc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.002100 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.009032 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.009058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.009067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc 
kubenswrapper[4740]: I0216 12:53:28.009081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.009089 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.033608 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.050593 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.111221 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.111274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.111287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.111303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.111317 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.213362 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.213401 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.213413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.213429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.213441 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.242018 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:51:25.362754336 +0000 UTC Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.316080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.316124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.316135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.316151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.316163 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.402468 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:28 crc kubenswrapper[4740]: E0216 12:53:28.402747 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:28 crc kubenswrapper[4740]: E0216 12:53:28.402902 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:29.40287008 +0000 UTC m=+36.779218831 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.418995 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.419060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.419077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.419103 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.419121 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.521702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.521735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.521746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.521765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.521777 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.624094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.624152 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.624171 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.624195 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.624213 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.726848 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.726892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.726903 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.726918 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.726928 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.829338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.829377 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.829390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.829407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.829419 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.932891 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.932941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.932957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.932978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:28 crc kubenswrapper[4740]: I0216 12:53:28.932993 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:28Z","lastTransitionTime":"2026-02-16T12:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.035882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.035952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.035972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.035999 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.036018 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.112678 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.112741 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.112911 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.112929 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.112939 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.112991 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:45.112977749 +0000 UTC m=+52.489326470 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.113063 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.113118 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.113146 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.113242 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:45.113209825 +0000 UTC m=+52.489558696 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.139014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.139064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.139080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.139097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.139109 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.213429 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.213567 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.213626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.213679 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.213712 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:53:45.213672605 +0000 UTC m=+52.590021356 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.213771 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:45.213746327 +0000 UTC m=+52.590095078 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.213776 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.213884 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:53:45.213858619 +0000 UTC m=+52.590207530 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.241700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.241772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.241793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.241844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.241863 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.242413 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:09:23.723904216 +0000 UTC Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.280505 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.280530 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.280580 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.280584 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.280883 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.281067 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.281285 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.281456 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.282017 4740 scope.go:117] "RemoveContainer" containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.345678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.345752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.345783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.345870 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.345894 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.416666 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.416890 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.416978 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:31.416960763 +0000 UTC m=+38.793309504 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.448828 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.448874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.448889 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.448905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.448915 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.551352 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.551404 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.551419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.551438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.551450 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.586010 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.588990 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.589908 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.607853 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-
cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.625250 4740 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.643315 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.654177 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.654211 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.654221 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.654235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc 
kubenswrapper[4740]: I0216 12:53:29.654248 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.662148 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc 
kubenswrapper[4740]: I0216 12:53:29.677839 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.695610 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.713161 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.729139 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.746384 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.756621 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.756667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.756678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.756693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.756704 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.767437 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.788189 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.799317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 
12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.799364 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.799374 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.799390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.799402 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.803679 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.818132 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.818624 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.821789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.821850 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc 
kubenswrapper[4740]: I0216 12:53:29.821863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.821880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.821892 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.832377 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.837635 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.841493 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.841540 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.841551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.841567 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.841577 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.847191 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.854855 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.857905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.857944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.857956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.857970 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.857979 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.869250 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.876705 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.880926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.880963 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.880984 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.881001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.881010 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.893840 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:29 crc kubenswrapper[4740]: E0216 12:53:29.894002 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.896020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.896049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.896058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.896072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.896081 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.999178 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.999241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.999265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.999293 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:29 crc kubenswrapper[4740]: I0216 12:53:29.999314 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:29Z","lastTransitionTime":"2026-02-16T12:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.102272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.102346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.102368 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.102400 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.102421 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.205298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.205359 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.205376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.205401 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.205419 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.243320 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:15:49.796028849 +0000 UTC Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.308521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.308598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.308620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.308652 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.308676 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.411909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.411972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.411994 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.412021 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.412039 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.520209 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.520557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.520644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.520742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.520852 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.623489 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.623557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.623577 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.623600 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.623619 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.726718 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.726757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.726765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.726797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.726837 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.829790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.829884 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.829906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.829931 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.829948 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.932529 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.932578 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.932594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.932615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:30 crc kubenswrapper[4740]: I0216 12:53:30.932631 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:30Z","lastTransitionTime":"2026-02-16T12:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.035061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.035124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.035142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.035166 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.035184 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.138694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.138764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.138777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.138839 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.138855 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.241290 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.241574 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.241736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.241866 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.241995 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.244490 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:38:15.147473105 +0000 UTC Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.280348 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.280432 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.280501 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.280525 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.280605 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.280720 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.280729 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.280871 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.344783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.344845 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.344858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.344875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.344887 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.435719 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.435959 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:31 crc kubenswrapper[4740]: E0216 12:53:31.436081 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:35.43605377 +0000 UTC m=+42.812402701 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.447972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.448033 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.448046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.448065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.448077 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.551867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.551921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.551933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.551955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.552216 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.655594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.655641 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.655652 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.655671 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.655681 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.758740 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.758793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.758804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.758841 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.758853 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.861632 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.861686 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.861697 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.861718 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.861732 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.964373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.964413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.964425 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.964442 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:31 crc kubenswrapper[4740]: I0216 12:53:31.964454 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:31Z","lastTransitionTime":"2026-02-16T12:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.067264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.067296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.067305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.067317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.067325 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.170523 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.170574 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.170590 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.170614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.170631 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.244875 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:28:24.717493896 +0000 UTC Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.272987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.273048 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.273067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.273098 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.273116 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.377013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.377075 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.377102 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.377129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.377146 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.479794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.479844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.479852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.479869 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.479878 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.583023 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.583061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.583072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.583086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.583097 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.691320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.691364 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.691376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.691392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.691403 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.793738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.793774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.793783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.793795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.793804 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.897179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.897582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.897961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.898340 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:32 crc kubenswrapper[4740]: I0216 12:53:32.898742 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:32Z","lastTransitionTime":"2026-02-16T12:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.001702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.001742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.001752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.001767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.001779 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.104336 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.104371 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.104379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.104394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.104403 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.207751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.207797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.207831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.207855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.207867 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.245630 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:49:02.97468691 +0000 UTC Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.280899 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:33 crc kubenswrapper[4740]: E0216 12:53:33.281047 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.281142 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:33 crc kubenswrapper[4740]: E0216 12:53:33.281400 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.281479 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.281509 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:33 crc kubenswrapper[4740]: E0216 12:53:33.281555 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:33 crc kubenswrapper[4740]: E0216 12:53:33.281604 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.298234 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.310331 4740 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.310545 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.310637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.310770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.310903 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.313969 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.340919 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6e0484517152b9c054e47e36bc8fa7c2776344cc6f4643faa81e4effbd6c1f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:23Z\\\",\\\"message\\\":\\\" 6053 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:23.287863 6053 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:23.287878 6053 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:23.287916 6053 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 12:53:23.287967 6053 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 12:53:23.287977 6053 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 12:53:23.287970 6053 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:23.287996 6053 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 12:53:23.288002 6053 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:23.288010 6053 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 12:53:23.288014 6053 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 12:53:23.288024 6053 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 12:53:23.288025 6053 factory.go:656] Stopping watch factory\\\\nI0216 12:53:23.288019 6053 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:23.288046 6053 ovnkube.go:599] Stopped ovnkube\\\\nI0216 12:53:23.288045 6053 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 12:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.359348 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.378378 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.396895 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.412964 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.413249 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.413358 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.413226 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.413438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.413573 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.424895 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.446197 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.461077 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.477665 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.489350 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc 
kubenswrapper[4740]: I0216 12:53:33.503696 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515304 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515337 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.515418 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.533457 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.547207 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.617800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.617887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.617900 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.617922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.617940 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.720880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.720928 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.720944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.720966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.720978 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.824343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.824424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.824451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.824482 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.824507 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.928074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.928139 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.928156 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.928178 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:33 crc kubenswrapper[4740]: I0216 12:53:33.928196 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:33Z","lastTransitionTime":"2026-02-16T12:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.030139 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.030186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.030197 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.030231 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.030243 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.133082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.133133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.133150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.133171 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.133183 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.235871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.235999 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.236018 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.236041 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.236058 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.246638 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:51:38.880122209 +0000 UTC Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.339211 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.339653 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.339902 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.340124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.340293 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.443843 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.443895 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.443906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.443926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.443939 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.546741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.546785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.546794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.546829 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.546838 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.649904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.649950 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.649959 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.649974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.649984 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.753000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.753065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.753082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.753110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.753127 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.856035 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.856126 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.856146 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.856169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.856186 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.959244 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.959302 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.959317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.959342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:34 crc kubenswrapper[4740]: I0216 12:53:34.959359 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:34Z","lastTransitionTime":"2026-02-16T12:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.062009 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.062052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.062065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.062091 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.062103 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.165058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.165096 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.165108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.165124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.165136 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.247607 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:19:14.66054104 +0000 UTC Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.268395 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.268433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.268444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.268553 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.268567 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.280491 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.280598 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.280525 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.280508 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.280731 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.280986 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.281050 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.281230 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.371854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.371915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.371938 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.371968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.371990 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.475086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.475161 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.475186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.475219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.475241 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.479905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.480085 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:35 crc kubenswrapper[4740]: E0216 12:53:35.480202 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:43.480169029 +0000 UTC m=+50.856517790 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.578183 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.578223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.578232 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.578253 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.578265 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.681544 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.681620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.681640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.681672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.681710 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.784181 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.784271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.784285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.784317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.784333 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.887219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.887306 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.887372 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.887407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.887432 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.990167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.990233 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.990257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.990280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:35 crc kubenswrapper[4740]: I0216 12:53:35.990297 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:35Z","lastTransitionTime":"2026-02-16T12:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.093229 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.093272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.093282 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.093299 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.093311 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.197330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.197406 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.197428 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.197456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.197479 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.248488 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:47:47.645406077 +0000 UTC Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.300204 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.300259 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.300268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.300287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.300298 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.402700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.402760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.402776 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.402801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.402882 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.506426 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.506497 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.506521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.506551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.506575 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.609656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.609744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.609763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.609796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.609841 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.712902 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.712955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.712965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.712986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.713001 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.816626 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.816698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.816717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.816744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.816768 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.918874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.918935 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.918954 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.918978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:36 crc kubenswrapper[4740]: I0216 12:53:36.918995 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:36Z","lastTransitionTime":"2026-02-16T12:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.022774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.022852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.022876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.022899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.022914 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.125578 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.125896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.125994 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.126068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.126124 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.229172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.229223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.229237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.229257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.229273 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.248675 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:34:35.322978762 +0000 UTC Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.281051 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.281185 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:37 crc kubenswrapper[4740]: E0216 12:53:37.281266 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.281303 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:37 crc kubenswrapper[4740]: E0216 12:53:37.281451 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:37 crc kubenswrapper[4740]: E0216 12:53:37.281601 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.281097 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:37 crc kubenswrapper[4740]: E0216 12:53:37.281836 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.331301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.331626 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.331726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.331801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.331906 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.434199 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.434252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.434263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.434567 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.434605 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.537371 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.537408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.537417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.537430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.537439 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.638978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.639011 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.639022 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.639036 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.639046 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.742014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.742062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.742112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.742131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.742146 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.845573 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.845639 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.845658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.845683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.845703 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.949069 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.949125 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.949140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.949161 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:37 crc kubenswrapper[4740]: I0216 12:53:37.949173 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:37Z","lastTransitionTime":"2026-02-16T12:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.052676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.052728 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.052742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.052761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.052774 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.156272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.156331 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.156353 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.156383 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.156404 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.249867 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 21:38:19.860712453 +0000 UTC Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.259942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.260010 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.260032 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.260090 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.260114 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.363326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.363393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.363408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.363433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.363445 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.467266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.467326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.467348 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.467379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.467486 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.570734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.570806 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.570863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.570902 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.570925 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.675173 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.675243 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.675265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.675296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.675318 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.785076 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.785540 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.785735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.785975 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.786168 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.889163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.889216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.889233 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.889258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.889276 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.992142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.992223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.992246 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.992275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:38 crc kubenswrapper[4740]: I0216 12:53:38.992298 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:38Z","lastTransitionTime":"2026-02-16T12:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.095468 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.095561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.095584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.095613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.095634 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.198553 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.198644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.198680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.198711 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.198731 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.250057 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:34:53.081519998 +0000 UTC Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.281043 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.281115 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.281242 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:39 crc kubenswrapper[4740]: E0216 12:53:39.281432 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.281448 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:39 crc kubenswrapper[4740]: E0216 12:53:39.281585 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:39 crc kubenswrapper[4740]: E0216 12:53:39.281685 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:39 crc kubenswrapper[4740]: E0216 12:53:39.281730 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.302115 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.302181 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.302198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.302223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.302241 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.405849 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.405922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.405951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.405984 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.406009 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.508943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.509017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.509040 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.509071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.509092 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.611982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.612037 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.612056 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.612080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.612096 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.715590 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.715638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.715655 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.715678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.715695 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.819741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.819849 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.819869 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.819896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.819925 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.922645 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.923024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.923154 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.923360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:39 crc kubenswrapper[4740]: I0216 12:53:39.923520 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:39Z","lastTransitionTime":"2026-02-16T12:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.026481 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.026556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.026582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.026612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.026635 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.129859 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.129933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.129957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.129989 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.130010 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.180427 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.180584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.180605 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.180636 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.180652 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.200496 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.205591 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.205667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.205692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.205718 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.205736 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.227870 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.234043 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.234124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.234149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.234184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.234207 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.250559 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:57:26.951318932 +0000 UTC Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.254957 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",
\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.260002 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.260072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.260097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.260128 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.261939 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.280275 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.281367 4740 scope.go:117] "RemoveContainer" containerID="2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.287346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.287410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.287430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.287460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.287482 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.311877 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.311945 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: E0216 12:53:40.312283 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.317675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.317762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.317783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.317839 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.317870 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.340956 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.354925 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.376836 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.391998 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.406837 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.419551 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.421385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.421449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.421463 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.421518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.421533 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.437970 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6
dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.459060 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.474459 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.503743 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.522438 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.523834 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.523867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.523877 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.523893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.523904 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.539049 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.550130 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.561951 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc 
kubenswrapper[4740]: I0216 12:53:40.575077 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.626332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.626373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.626382 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.626397 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.626409 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.627192 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/1.log" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.630512 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.630856 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.644710 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51
c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.665036 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.685458 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.702198 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.720113 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.728613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.728672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.728681 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.728694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.728702 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.736445 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.760634 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.776349 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.794369 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b
178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.805749 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.816473 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc 
kubenswrapper[4740]: I0216 12:53:40.827848 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.830968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.831024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.831041 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.831068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.831084 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.838308 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.848423 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.859978 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.872131 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.933521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.933560 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.933571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.933585 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:40 crc kubenswrapper[4740]: I0216 12:53:40.933595 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:40Z","lastTransitionTime":"2026-02-16T12:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.036094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.036146 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.036163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.036189 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.036206 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.138675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.138714 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.138727 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.138744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.138756 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.245386 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.245450 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.245471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.245512 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.245538 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.250846 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:35:24.557149349 +0000 UTC Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.281095 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.281155 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:41 crc kubenswrapper[4740]: E0216 12:53:41.281262 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.281299 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.281338 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:41 crc kubenswrapper[4740]: E0216 12:53:41.281458 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:41 crc kubenswrapper[4740]: E0216 12:53:41.281587 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:41 crc kubenswrapper[4740]: E0216 12:53:41.281735 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.349412 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.349500 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.349534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.349566 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.349588 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.453861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.453933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.453953 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.454016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.454036 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.558090 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.558150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.558171 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.558215 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.558236 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.637535 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/2.log" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.638458 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/1.log" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.642756 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e" exitCode=1 Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.642860 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.642946 4740 scope.go:117] "RemoveContainer" containerID="2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.644169 4740 scope.go:117] "RemoveContainer" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e" Feb 16 12:53:41 crc kubenswrapper[4740]: E0216 12:53:41.644523 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.661206 4740 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.661451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.661474 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.661492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.661506 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.665389 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6
dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.679702 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.692427 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc 
kubenswrapper[4740]: I0216 12:53:41.710837 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.727851 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.748221 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.763275 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.765438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.765504 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.765526 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.765557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.765578 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.778886 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z 
is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.802141 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.818757 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.839671 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.859047 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.868542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.868588 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.868602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.868624 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.868640 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.874414 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.901731 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2912e5672a03591bdb4b57d12b9f36ca0bbdc7ff06e9cf189b337403074d616f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:24Z\\\",\\\"message\\\":\\\"6 12:53:24.386574 6179 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386767 6179 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 12:53:24.386792 6179 handler.go:190] Sending 
*v1.Pod event handler 6 for removal\\\\nI0216 12:53:24.386858 6179 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 12:53:24.386876 6179 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 12:53:24.386953 6179 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 12:53:24.386967 6179 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 12:53:24.387009 6179 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 12:53:24.387246 6179 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 12:53:24.387265 6179 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 12:53:24.387277 6179 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 12:53:24.387290 6179 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 12:53:24.387475 6179 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical 
port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"hos
t-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.915490 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.926780 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.970978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.971082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.971101 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.971125 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:41 crc kubenswrapper[4740]: I0216 12:53:41.971139 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:41Z","lastTransitionTime":"2026-02-16T12:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.074015 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.074090 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.074107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.074131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.074149 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.177456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.177524 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.177560 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.177588 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.177609 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.251053 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:08:08.637867404 +0000 UTC Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.281347 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.281397 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.281414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.281438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.281455 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.384150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.384287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.384314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.384343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.384362 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.487841 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.487896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.487913 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.487941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.487958 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.591084 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.591395 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.591560 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.591700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.591847 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.656677 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/2.log" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.663785 4740 scope.go:117] "RemoveContainer" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e" Feb 16 12:53:42 crc kubenswrapper[4740]: E0216 12:53:42.664074 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.683863 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.694623 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.694693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.694715 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.694747 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.694765 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.700879 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdf
e053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.721479 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.736516 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.753804 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.769760 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9
cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.787950 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888
154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e
496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.797654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.797709 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.797720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.797738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.797766 4740 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.802439 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.816285 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc 
kubenswrapper[4740]: I0216 12:53:42.828729 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.845004 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.864629 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.881913 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.898683 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.899973 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.900022 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.900033 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.900045 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.900054 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:42Z","lastTransitionTime":"2026-02-16T12:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.915705 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:42 crc kubenswrapper[4740]: I0216 12:53:42.938128 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:42Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.002453 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.002508 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.002524 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.002546 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.002562 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.105621 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.105689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.105712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.105752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.105785 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.209009 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.209092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.209111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.209138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.209155 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.251487 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:00:02.017860521 +0000 UTC Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.280696 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.280786 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.280852 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.280924 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.281130 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.281689 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.281802 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.281898 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.299278 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.311219 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.312186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.312275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.312307 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.312346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.312376 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.331738 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.347425 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.363885 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.377456 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.390278 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.407382 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.416002 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.416064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.416076 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.416099 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.416120 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.422917 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z 
is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.440416 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.453586 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.465875 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc 
kubenswrapper[4740]: I0216 12:53:43.480317 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.495262 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.511792 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.518696 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.518734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.518745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.518761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.518773 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.525391 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.573198 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.573473 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:43 crc kubenswrapper[4740]: E0216 12:53:43.573642 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:53:59.573601311 +0000 UTC m=+66.949950202 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.621040 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.621080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.621091 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.621108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.621119 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.723459 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.723519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.723532 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.723558 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.723572 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.827073 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.827454 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.827530 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.827634 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.827716 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.930746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.931240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.931313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.931385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:43 crc kubenswrapper[4740]: I0216 12:53:43.931462 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:43Z","lastTransitionTime":"2026-02-16T12:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.035122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.035191 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.035208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.035232 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.035251 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.138941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.139002 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.139014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.139036 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.139051 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.242425 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.242503 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.242527 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.242556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.242580 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.252002 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:59:17.281423474 +0000 UTC Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.345466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.345585 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.345611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.345647 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.345672 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.449753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.450399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.450473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.450598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.450677 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.553726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.553763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.553773 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.553837 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.553853 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.657595 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.657644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.657654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.657676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.657686 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.761224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.761692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.761789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.761911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.762020 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.866051 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.866102 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.866117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.866142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.866159 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.969715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.969798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.969861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.969901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:44 crc kubenswrapper[4740]: I0216 12:53:44.969927 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:44Z","lastTransitionTime":"2026-02-16T12:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.079097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.079465 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.079606 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.079790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.079968 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.183631 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.183695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.183707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.183731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.183743 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.192637 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.192796 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193002 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193028 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193050 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193098 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:45 crc 
kubenswrapper[4740]: E0216 12:53:45.193055 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193180 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:54:17.193153949 +0000 UTC m=+84.569502710 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193183 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.193279 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:54:17.193260273 +0000 UTC m=+84.569609034 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.253176 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:12:17.749613307 +0000 UTC Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.280795 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.281016 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.281663 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.281796 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.281699 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.282031 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.282187 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.282364 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.287315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.287365 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.287385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.287417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.287443 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.293951 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.294157 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 12:54:17.294120194 +0000 UTC m=+84.670468915 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.294298 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.294360 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.294463 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.294575 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.294601 4740 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:54:17.294560136 +0000 UTC m=+84.670909057 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: E0216 12:53:45.294658 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:54:17.294648668 +0000 UTC m=+84.670997389 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.391153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.391221 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.391239 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.391268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.391289 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.494976 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.495041 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.495057 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.495082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.495100 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.598048 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.598135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.598153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.598182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.598199 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.701931 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.702262 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.702364 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.702509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.702621 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.806150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.806215 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.806235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.806267 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.806286 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.856401 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.868277 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.877056 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.889333 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.906703 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b
178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.909133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.909190 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.909205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.909226 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.909241 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:45Z","lastTransitionTime":"2026-02-16T12:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.921986 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.939686 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.960248 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.978762 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:45 crc kubenswrapper[4740]: I0216 12:53:45.997529 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.010953 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.010985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.010996 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.011011 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.011024 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.012587 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.033995 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.052851 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.081262 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.097148 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.114384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.114449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.114464 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.114483 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.114497 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.118088 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.136792 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.142584 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.151550 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.167576 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.181039 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.194140 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.204773 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.216467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.216707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.216854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.216957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.217070 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.219935 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.238733 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.250899 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc 
kubenswrapper[4740]: I0216 12:53:46.253505 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:24:59.501924729 +0000 UTC Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.266362 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.281252 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.300795 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.317754 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.319050 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.319087 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.319097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.319112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.319121 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.333326 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z 
is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.347450 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.380138 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.399093 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.413789 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.421092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.421146 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.421163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.421180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.421194 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.428478 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:46Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.523014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.523095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.523120 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.523151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.523171 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.626318 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.626380 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.626405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.626437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.626459 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.729646 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.729712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.729734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.729759 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.729776 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.832894 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.832944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.832955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.832973 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.832987 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.935454 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.935521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.935543 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.935572 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:46 crc kubenswrapper[4740]: I0216 12:53:46.935594 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:46Z","lastTransitionTime":"2026-02-16T12:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.039109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.039184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.039214 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.039247 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.039270 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.142653 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.142720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.142739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.142776 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.142860 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.245859 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.245946 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.245961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.245987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.246003 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.254345 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:37:13.079287836 +0000 UTC Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.280862 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.280935 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:47 crc kubenswrapper[4740]: E0216 12:53:47.281014 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.280935 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:47 crc kubenswrapper[4740]: E0216 12:53:47.281125 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:47 crc kubenswrapper[4740]: E0216 12:53:47.281233 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.281382 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:47 crc kubenswrapper[4740]: E0216 12:53:47.281554 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.349906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.350237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.350424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.350618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.350805 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.453618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.453658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.453688 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.453708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.453718 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.556432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.556479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.556492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.556510 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.556522 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.659017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.659094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.659110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.659132 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.659148 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.761643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.761896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.762009 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.762095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.762175 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.865467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.865521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.865537 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.865561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.865578 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.968707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.968762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.968774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.968793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:47 crc kubenswrapper[4740]: I0216 12:53:47.968805 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:47Z","lastTransitionTime":"2026-02-16T12:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.071932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.071997 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.072014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.072043 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.072061 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.174796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.174858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.174869 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.174886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.174898 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.254685 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:23:17.212854883 +0000 UTC Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.277851 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.277937 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.277956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.277985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.278004 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.380668 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.380990 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.381076 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.381162 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.381250 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.484957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.485031 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.485045 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.485062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.485073 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.588352 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.588410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.588426 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.588445 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.588457 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.690266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.690308 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.690319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.690337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.690349 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.792662 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.792721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.792730 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.792747 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.792756 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.895779 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.895835 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.895843 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.895858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.895869 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.998561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.998613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.998624 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.998642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:48 crc kubenswrapper[4740]: I0216 12:53:48.998654 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:48Z","lastTransitionTime":"2026-02-16T12:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.101934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.101997 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.102023 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.102052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.102074 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.204622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.204661 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.204671 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.204685 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.204696 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.255166 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:52:47.267252694 +0000 UTC
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.280647 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.280691 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.280719 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.280666 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx"
Feb 16 12:53:49 crc kubenswrapper[4740]: E0216 12:53:49.280839 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:53:49 crc kubenswrapper[4740]: E0216 12:53:49.280959 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:53:49 crc kubenswrapper[4740]: E0216 12:53:49.280991 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb"
Feb 16 12:53:49 crc kubenswrapper[4740]: E0216 12:53:49.281051 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.307725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.307768 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.307783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.307801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.307830 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.410785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.410860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.410875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.410898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.410912 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.513357 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.513406 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.513418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.513434 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.513447 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.616047 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.616106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.616119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.616137 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.616151 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.719440 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.719487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.719498 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.719519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.719532 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.822280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.822330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.822345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.822368 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.822455 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.925433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.925478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.925487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.925503 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:49 crc kubenswrapper[4740]: I0216 12:53:49.925514 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:49Z","lastTransitionTime":"2026-02-16T12:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.028210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.028252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.028263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.028280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.028291 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.131298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.131365 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.131388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.131412 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.131427 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.234150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.234195 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.234208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.234224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.234236 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.255704 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:07:21.128819355 +0000 UTC Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.337666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.337770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.337795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.337878 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.337904 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.441828 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.441876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.441887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.441908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.441919 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.549727 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.549901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.549979 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.550016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.550080 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.653494 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.653540 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.653552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.653568 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.653580 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.682206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.682303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.682315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.682334 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.682348 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.699310 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.704322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.704388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.704401 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.704420 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.704433 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.719230 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.723689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.723731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.723746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.723767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.723782 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.746089 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.751847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.751896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.751905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.751920 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.751930 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.763669 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.768272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.768323 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.768335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.768352 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.768366 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.781701 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:50 crc kubenswrapper[4740]: E0216 12:53:50.781889 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.783652 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.783692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.783704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.783721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.783734 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.885838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.885871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.885879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.885892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.885919 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.988199 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.988242 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.988251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.988266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:50 crc kubenswrapper[4740]: I0216 12:53:50.988278 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:50Z","lastTransitionTime":"2026-02-16T12:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.090320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.090428 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.090449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.090471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.090485 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.193378 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.193511 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.193534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.193567 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.193610 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.256263 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:32:46.293495405 +0000 UTC Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.280152 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.280166 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.280327 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:51 crc kubenswrapper[4740]: E0216 12:53:51.280532 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.280597 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:51 crc kubenswrapper[4740]: E0216 12:53:51.280758 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:51 crc kubenswrapper[4740]: E0216 12:53:51.280972 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:51 crc kubenswrapper[4740]: E0216 12:53:51.281170 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.295301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.295342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.295351 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.295362 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.295373 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.398055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.398132 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.398158 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.398190 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.398213 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.501351 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.501395 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.501410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.501431 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.501442 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.603788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.603893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.603916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.603952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.603978 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.707363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.707428 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.707447 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.707471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.707489 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.811101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.811150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.811163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.811188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.811200 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.914968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.915353 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.915518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.915664 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:51 crc kubenswrapper[4740]: I0216 12:53:51.915803 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:51Z","lastTransitionTime":"2026-02-16T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.018956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.019032 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.019059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.019093 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.019118 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.123379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.123466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.123492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.123522 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.123557 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.226771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.226852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.226864 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.226882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.226893 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.256975 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:21:01.54957382 +0000 UTC Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.329686 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.329761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.329907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.329939 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.329956 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.433092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.433514 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.433704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.433932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.434111 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.537306 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.537767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.538095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.538237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.538349 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.640863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.640922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.640935 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.640954 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.640967 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.744968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.745045 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.745058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.745085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.745098 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.848902 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.848959 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.848976 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.848997 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.849013 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.952355 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.952649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.952743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.952860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:52 crc kubenswrapper[4740]: I0216 12:53:52.952958 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:52Z","lastTransitionTime":"2026-02-16T12:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.056081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.056149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.056163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.056188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.056203 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.159098 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.159148 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.159159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.159177 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.159190 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.257547 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:05:40.80437878 +0000 UTC Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.261769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.262283 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.262686 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.262789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.262895 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.282417 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:53 crc kubenswrapper[4740]: E0216 12:53:53.282539 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.282594 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:53 crc kubenswrapper[4740]: E0216 12:53:53.282644 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.284765 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:53 crc kubenswrapper[4740]: E0216 12:53:53.284999 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.285205 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:53 crc kubenswrapper[4740]: E0216 12:53:53.285444 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.301268 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.317513 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.334110 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.349027 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.365591 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.365651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.365668 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.365693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.365709 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.367476 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z 
is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.384956 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.396723 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.409558 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc 
kubenswrapper[4740]: I0216 12:53:53.423352 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094e
ead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.435389 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.449994 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.462978 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.471971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.472020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.472030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.472053 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.472067 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.477460 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.493021 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.520703 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.541719 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.557200 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.575039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.575308 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.575396 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.575497 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.575575 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.679059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.679110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.679122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.679140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.679151 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.782633 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.782683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.782695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.782715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.782731 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.885916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.885956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.885966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.885981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.885990 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.988846 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.988915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.988926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.988949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:53 crc kubenswrapper[4740]: I0216 12:53:53.988961 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:53Z","lastTransitionTime":"2026-02-16T12:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.091587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.091649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.091658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.091684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.091694 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.194979 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.195027 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.195038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.195058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.195068 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.259520 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:15:25.23839821 +0000 UTC Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.297350 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.297413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.297423 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.297444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.297464 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.400468 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.400513 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.400524 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.400539 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.400550 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.506435 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.506493 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.506521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.506546 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.506563 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.609863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.609942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.609962 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.609988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.610013 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.713755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.713876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.713903 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.713933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.713952 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.817000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.817058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.817071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.817092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.817107 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.920611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.920665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.920676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.920696 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:54 crc kubenswrapper[4740]: I0216 12:53:54.920708 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:54Z","lastTransitionTime":"2026-02-16T12:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.023993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.024052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.024067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.024086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.024098 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.127460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.127516 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.127528 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.127550 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.127588 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.231315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.231376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.231390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.231408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.231425 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.260125 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:36:52.516908699 +0000 UTC
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.280770 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.280781 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.280792 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.280836 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:53:55 crc kubenswrapper[4740]: E0216 12:53:55.281307 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:53:55 crc kubenswrapper[4740]: E0216 12:53:55.281471 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.281572 4740 scope.go:117] "RemoveContainer" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e"
Feb 16 12:53:55 crc kubenswrapper[4740]: E0216 12:53:55.281581 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb"
Feb 16 12:53:55 crc kubenswrapper[4740]: E0216 12:53:55.281717 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:53:55 crc kubenswrapper[4740]: E0216 12:53:55.281742 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.334473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.334533 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.334552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.334578 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.334597 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.438116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.438173 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.438184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.438205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.438221 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.540926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.541241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.541320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.541394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.541457 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.645274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.645325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.645336 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.645362 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.645375 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.749354 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.749467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.749570 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.749602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.749617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.853340 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.853430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.853484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.853510 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.853531 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.956144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.956192 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.956202 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.956225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:55 crc kubenswrapper[4740]: I0216 12:53:55.956237 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:55Z","lastTransitionTime":"2026-02-16T12:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.059501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.059542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.059551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.059565 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.059574 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.162066 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.162099 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.162109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.162123 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.162135 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.260507 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:11:13.241819746 +0000 UTC
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.264680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.264739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.264758 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.264782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.264798 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.367292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.367325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.367333 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.367346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.367355 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.469077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.469129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.469141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.469157 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.469166 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.572258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.572661 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.572942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.573186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.573398 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.676252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.676301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.676318 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.676345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.676366 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.779684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.780091 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.780227 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.780338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.780472 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.883882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.883944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.883961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.884316 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.884353 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.987533 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.987582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.987598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.987620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:56 crc kubenswrapper[4740]: I0216 12:53:56.987636 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:56Z","lastTransitionTime":"2026-02-16T12:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.090963 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.091313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.091410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.091541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.091646 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.194071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.194142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.194156 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.194170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.194181 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.261501 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 14:05:38.66361818 +0000 UTC
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.280971 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.281026 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.281049 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.280971 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx"
Feb 16 12:53:57 crc kubenswrapper[4740]: E0216 12:53:57.281100 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:53:57 crc kubenswrapper[4740]: E0216 12:53:57.281223 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:53:57 crc kubenswrapper[4740]: E0216 12:53:57.281319 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:53:57 crc kubenswrapper[4740]: E0216 12:53:57.281403 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.297012 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.297045 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.297077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.297094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.297109 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.399917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.399985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.400007 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.400026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.400038 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.506697 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.506770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.506783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.506800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.506825 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.608551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.608625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.608649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.608677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.608697 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.712328 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.712638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.712719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.712831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.712927 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.817245 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.817330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.817363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.817437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.817496 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.920252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.920324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.920347 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.920375 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:57 crc kubenswrapper[4740]: I0216 12:53:57.920397 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:57Z","lastTransitionTime":"2026-02-16T12:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.023339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.023413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.023435 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.023466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.023489 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.125890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.125938 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.125948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.125967 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.125980 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.228312 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.228351 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.228361 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.228377 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.228388 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.264474 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:43:14.598330917 +0000 UTC Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.330370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.330419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.330431 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.330449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.330460 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.433068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.433112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.433123 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.433141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.433152 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.535893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.535934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.535943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.535959 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.535974 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.638663 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.638713 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.638721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.638736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.638746 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.740799 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.740862 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.740872 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.740887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.740898 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.842429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.842456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.842466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.842479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.842486 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.944666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.944704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.944715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.944730 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:58 crc kubenswrapper[4740]: I0216 12:53:58.944741 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:58Z","lastTransitionTime":"2026-02-16T12:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.047785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.047851 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.047860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.047874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.047884 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.150551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.150587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.150596 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.150609 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.150617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.253169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.253210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.253222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.253240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.253251 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.265111 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:31:49.411002243 +0000 UTC Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.280491 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.280514 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.280522 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.280677 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.280971 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.281076 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.281139 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.281227 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.355396 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.355439 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.355457 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.355476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.355485 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.458416 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.458454 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.458465 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.458479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.458489 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.560726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.560752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.560775 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.560789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.560798 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.662130 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.662507 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:59 crc kubenswrapper[4740]: E0216 12:53:59.662626 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:54:31.662601661 +0000 UTC m=+99.038950412 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.663290 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.663347 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.663360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.663377 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.663390 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.765376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.765410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.765437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.765452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.765461 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.868883 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.868953 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.868970 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.868995 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.869014 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.971210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.971258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.971271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.971288 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:53:59 crc kubenswrapper[4740]: I0216 12:53:59.971300 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:53:59Z","lastTransitionTime":"2026-02-16T12:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.073784 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.073856 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.073871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.073891 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.073904 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.176798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.176861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.176875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.176897 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.176911 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.266142 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:23:55.642912272 +0000 UTC Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.279880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.279945 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.279957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.279981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.279995 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.383153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.383199 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.383208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.383226 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.383235 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.489769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.490309 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.490373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.490493 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.490552 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.593731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.593788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.593800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.593840 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.593854 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.697186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.697244 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.697254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.697275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.697289 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.788267 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.788322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.788335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.788355 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.788366 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.801874 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.805515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.805688 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.805761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.805895 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.805996 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.820232 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.823956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.824008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.824020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.824038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.824066 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.835261 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.838552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.838602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.838611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.838624 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.838650 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.851111 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.854330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.854359 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.854372 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.854389 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.854399 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.865205 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:00 crc kubenswrapper[4740]: E0216 12:54:00.865374 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.866904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.866936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.866947 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.866965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.866975 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.973764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.973856 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.973879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.973906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:00 crc kubenswrapper[4740]: I0216 12:54:00.973924 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:00Z","lastTransitionTime":"2026-02-16T12:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.076276 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.076310 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.076318 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.076331 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.076340 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.178449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.178489 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.178499 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.178512 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.178521 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.267872 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:09:37.493701857 +0000 UTC Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.280443 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:01 crc kubenswrapper[4740]: E0216 12:54:01.280566 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.280612 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.280625 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:01 crc kubenswrapper[4740]: E0216 12:54:01.280751 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.280771 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.281041 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.281079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.281093 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.281111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.281124 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: E0216 12:54:01.281434 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:01 crc kubenswrapper[4740]: E0216 12:54:01.281525 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.383430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.383497 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.383507 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.383522 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.383532 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.485971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.486001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.486012 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.486026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.486035 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.590059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.590107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.590120 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.590137 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.590148 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.692627 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.692682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.692698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.692723 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.692740 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.727244 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/0.log" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.727341 4740 generic.go:334] "Generic (PLEG): container finished" podID="21f981d4-46dd-4bb5-b244-aaf603008c5e" containerID="a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd" exitCode=1 Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.727393 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerDied","Data":"a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.728181 4740 scope.go:117] "RemoveContainer" containerID="a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.749980 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.767160 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.781640 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.793714 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.794894 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.794936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.794947 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.794964 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.794977 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.810518 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.822533 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.841218 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.853900 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.866870 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.877785 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.888292 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc 
kubenswrapper[4740]: I0216 12:54:01.897242 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.897285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.897296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.897318 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.897330 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:01Z","lastTransitionTime":"2026-02-16T12:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.899774 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.910857 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.920223 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.932960 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.945751 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:01 crc kubenswrapper[4740]: I0216 12:54:01.961318 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.001349 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.001414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.001430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.001451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.001466 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.104352 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.104398 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.104410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.104427 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.104438 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.207301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.207352 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.207366 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.207384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.207395 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.268216 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:21:42.279492843 +0000 UTC Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.309725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.309772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.309782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.309800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.309827 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.412895 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.412943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.412952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.412968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.412979 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.515067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.515120 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.515130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.515144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.515153 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.618043 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.618087 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.618100 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.618116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.618128 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.721059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.721107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.721119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.721138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.721148 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.737200 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/0.log" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.737268 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerStarted","Data":"f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.752343 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.761338 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.770654 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.781758 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4
be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.796783 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.807730 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.817515 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc 
kubenswrapper[4740]: I0216 12:54:02.823046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.823074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.823083 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.823112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.823122 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.830521 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.841260 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.856346 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.874258 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.885639 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.895396 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.912684 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.926288 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.926328 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.926339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.926355 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.926367 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:02Z","lastTransitionTime":"2026-02-16T12:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.927423 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.947913 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:02 crc kubenswrapper[4740]: I0216 12:54:02.958826 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.028087 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.028121 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.028130 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.028142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.028151 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.130318 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.130361 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.130373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.130390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.130400 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.233241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.233319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.233338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.233364 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.233381 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.269240 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:41:14.302600968 +0000 UTC Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.280197 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.280235 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:03 crc kubenswrapper[4740]: E0216 12:54:03.280322 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:03 crc kubenswrapper[4740]: E0216 12:54:03.280464 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.280547 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:03 crc kubenswrapper[4740]: E0216 12:54:03.280609 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.280868 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:03 crc kubenswrapper[4740]: E0216 12:54:03.280932 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.295650 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.305055 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.325649 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b
178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.335676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.335831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.335848 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.335863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.335874 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.338374 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.353552 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.372953 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.385527 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.397526 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.407900 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.418255 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.426914 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.438210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.438236 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.438245 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc 
kubenswrapper[4740]: I0216 12:54:03.438258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.438268 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.443573 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.453113 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.467283 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.481339 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.495305 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.506982 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.540464 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.540515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.540526 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.540540 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.540550 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.643482 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.643894 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.643918 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.643951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.643973 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.746437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.746498 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.746520 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.746547 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.746568 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.849210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.849266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.849278 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.849295 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.849304 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.951182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.951228 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.951240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.951256 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:03 crc kubenswrapper[4740]: I0216 12:54:03.951268 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:03Z","lastTransitionTime":"2026-02-16T12:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.053356 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.053385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.053393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.053407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.053416 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.155292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.155343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.155354 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.155373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.155383 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.259665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.259707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.259716 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.259731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.259741 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.270353 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 12:08:09.242498106 +0000 UTC Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.361602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.361659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.361672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.361694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.361705 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.463932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.463977 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.463986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.464001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.464011 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.566798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.566852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.566863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.566877 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.566885 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.669545 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.669584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.669592 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.669607 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.669618 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.771521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.771561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.771569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.771584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.771592 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.874907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.874966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.874981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.875004 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.875018 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.977399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.977445 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.977455 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.977471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:04 crc kubenswrapper[4740]: I0216 12:54:04.977482 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:04Z","lastTransitionTime":"2026-02-16T12:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.080066 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.080109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.080121 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.080140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.080152 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.183053 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.183095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.183106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.183124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.183137 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.271356 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:25:31.98894869 +0000 UTC Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.280967 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.280998 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.281025 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.281048 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:05 crc kubenswrapper[4740]: E0216 12:54:05.281082 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:05 crc kubenswrapper[4740]: E0216 12:54:05.281158 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:05 crc kubenswrapper[4740]: E0216 12:54:05.281242 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:05 crc kubenswrapper[4740]: E0216 12:54:05.281324 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.285599 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.285626 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.285635 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.285649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.285658 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.387877 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.387928 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.387940 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.387956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.387967 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.489736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.489763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.489771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.489783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.489792 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.592429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.592480 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.592490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.592511 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.592522 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.694886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.694943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.694961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.694981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.694993 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.798000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.798048 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.798061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.798083 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.798094 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.900324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.900363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.900375 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.900389 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:05 crc kubenswrapper[4740]: I0216 12:54:05.900398 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:05Z","lastTransitionTime":"2026-02-16T12:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.003210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.003260 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.003277 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.003298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.003314 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.105313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.105349 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.105357 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.105371 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.105381 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.207586 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.207630 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.207642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.207663 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.207677 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.272169 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:43:58.784028153 +0000 UTC Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.309661 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.309698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.309708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.309722 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.309731 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.412113 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.412170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.412182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.412200 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.412211 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.514536 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.514584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.514594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.514612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.514624 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.617398 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.617456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.617466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.617485 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.617494 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.719974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.720019 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.720055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.720075 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.720087 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.823217 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.823264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.823276 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.823294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.823306 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.926559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.926598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.926607 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.926624 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:06 crc kubenswrapper[4740]: I0216 12:54:06.926633 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:06Z","lastTransitionTime":"2026-02-16T12:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.028681 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.028736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.028752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.028774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.028798 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.131675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.131744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.131756 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.131774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.131823 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.234286 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.234345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.234361 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.234384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.234399 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.272873 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:43:36.138443695 +0000 UTC Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.280204 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:07 crc kubenswrapper[4740]: E0216 12:54:07.280373 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.280628 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:07 crc kubenswrapper[4740]: E0216 12:54:07.280707 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.280879 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:07 crc kubenswrapper[4740]: E0216 12:54:07.280941 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.281181 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:07 crc kubenswrapper[4740]: E0216 12:54:07.281251 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.338446 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.338514 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.338527 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.338548 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.338562 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.441101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.441140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.441151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.441165 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.441175 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.543023 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.543070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.543082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.543097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.543107 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.646305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.646347 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.646356 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.646369 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.646378 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.749839 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.749897 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.749913 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.749932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.749945 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.853906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.853949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.853960 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.853979 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.853995 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.956602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.956674 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.956699 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.956727 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:07 crc kubenswrapper[4740]: I0216 12:54:07.956750 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:07Z","lastTransitionTime":"2026-02-16T12:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.059556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.059600 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.059609 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.059622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.059631 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.162709 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.162753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.162762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.162785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.162797 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.264911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.264971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.264982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.265008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.265022 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.273963 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:40:44.586972497 +0000 UTC Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.281667 4740 scope.go:117] "RemoveContainer" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.370849 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.370930 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.370942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.370972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.370991 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.473719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.473781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.473795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.473840 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.473855 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.576915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.577014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.577033 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.577057 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.577074 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.679248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.679287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.679299 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.679319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.679329 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.760926 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/2.log" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.764676 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.765270 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.781343 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.782231 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc 
kubenswrapper[4740]: I0216 12:54:08.782264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.782272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.782314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.782324 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.800907 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.820905 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.837164 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.859980 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it 
has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.874772 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.884418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.884446 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.884454 4740 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.884467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.884476 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.887943 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.899465 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.913880 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.941318 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.955367 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.965980 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.977585 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.986963 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.987005 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.987019 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.987036 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.987047 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:08Z","lastTransitionTime":"2026-02-16T12:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:08 crc kubenswrapper[4740]: I0216 12:54:08.990682 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.001768 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:08Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.012365 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc 
kubenswrapper[4740]: I0216 12:54:09.029502 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094e
ead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.089733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.089767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.089778 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.089794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.089827 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.192382 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.192421 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.192429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.192443 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.192452 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.274486 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:37:19.396511715 +0000 UTC Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.280835 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.280888 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.280888 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.280971 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:09 crc kubenswrapper[4740]: E0216 12:54:09.280963 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:09 crc kubenswrapper[4740]: E0216 12:54:09.281094 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:09 crc kubenswrapper[4740]: E0216 12:54:09.281166 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:09 crc kubenswrapper[4740]: E0216 12:54:09.281585 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.294517 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.294551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.294559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.294572 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.294581 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.397073 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.397115 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.397124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.397138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.397149 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.499300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.499360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.499372 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.499387 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.499397 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.602272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.602345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.602370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.602403 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.602426 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.705501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.705569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.705587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.705611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.705627 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.773235 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/3.log" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.774541 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/2.log" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.780860 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" exitCode=1 Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.780968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.781067 4740 scope.go:117] "RemoveContainer" containerID="ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.784265 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" Feb 16 12:54:09 crc kubenswrapper[4740]: E0216 12:54:09.785528 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.809563 4740 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.809726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.809808 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.809917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.809945 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.811902 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db
823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.833979 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.856632 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.877047 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.912672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.912735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.912748 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:09 crc 
kubenswrapper[4740]: I0216 12:54:09.912766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.912778 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:09Z","lastTransitionTime":"2026-02-16T12:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.913160 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2fbf4fecf17886194dcc7ef2fc03d0142e9c289f5d24b207c14b591b65f37e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:53:41Z\\\",\\\"message\\\":\\\"GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 12:53:41.122889 6421 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122886 6421 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-tcfzx] creating logical port openshift-multus_network-metrics-daemon-tcfzx for pod on switch crc\\\\nI0216 12:53:41.122920 6421 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:53:41.122922 6421 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-ttqrb\\\\nI0216 12:53:41.122939 6421 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0216 12:53:41.122944 6421 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:09Z\\\",\\\"message\\\":\\\"_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221368 6809 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221378 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221379 6809 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221390 6809 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0216 12:54:09.221390 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":
\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overri
des\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 
12:54:09.935201 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.950782 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.961739 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.972577 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.981919 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:09 crc kubenswrapper[4740]: I0216 12:54:09.992234 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:09Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.003149 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.015391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.015425 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.015433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.015448 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.015477 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.018573 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.041827 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6db
e1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.053502 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.064328 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.081279 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.118394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.118469 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.118483 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.118529 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.118548 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.221024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.221065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.221079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.221100 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.221116 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.275390 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:05:38.488798867 +0000 UTC Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.323746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.323793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.323805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.323842 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.323854 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.426797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.426886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.426899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.426921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.426938 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.529341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.529401 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.529417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.529433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.529443 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.633257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.633325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.633337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.633357 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.633370 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.736450 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.736505 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.736517 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.736534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.736546 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.790965 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/3.log" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.794890 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" Feb 16 12:54:10 crc kubenswrapper[4740]: E0216 12:54:10.795189 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.807597 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.821098 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.835901 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.841711 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.841757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.841768 4740 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.841786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.841800 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.849256 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.863417 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.876584 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.890687 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.904053 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6db
e1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.923527 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.932415 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.943710 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e4192
37a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.944879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.944907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.944917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.944933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:10 crc 
kubenswrapper[4740]: I0216 12:54:10.944944 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:10Z","lastTransitionTime":"2026-02-16T12:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.955902 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.971490 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:10 crc kubenswrapper[4740]: I0216 12:54:10.983082 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:10Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.004899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.004954 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.004966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc 
kubenswrapper[4740]: I0216 12:54:11.004984 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.004997 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.009575 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:09Z\\\",\\\"message\\\":\\\"_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221368 6809 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221378 6809 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221379 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221390 6809 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0216 12:54:09.221390 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:54:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.018309 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022286 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022293 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022306 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022314 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.022303 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.033365 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d5
46e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.034262 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.037521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.037556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.037565 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.037578 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.037589 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.050629 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.055552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.055597 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.055607 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.055625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.055639 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.069965 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.074417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.074454 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.074463 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.074478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.074488 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.087117 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:11Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.087273 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.088424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.088462 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.088478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.088502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.088518 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.191130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.191163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.191172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.191186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.191196 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.276549 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:11:45.736454643 +0000 UTC Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.280970 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.281032 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.281234 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.281291 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.281450 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.281622 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.281727 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:11 crc kubenswrapper[4740]: E0216 12:54:11.281926 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.295050 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.295092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.295105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.295518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.295778 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.298195 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.398402 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.398440 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.398453 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.398470 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.398483 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.501490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.501568 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.501587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.501613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.501632 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.604701 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.604738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.604750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.604766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.604774 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.707765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.707860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.707903 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.707922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.707934 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.811319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.811446 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.811460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.811478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.811488 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.915693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.915763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.915781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.915831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:11 crc kubenswrapper[4740]: I0216 12:54:11.915853 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:11Z","lastTransitionTime":"2026-02-16T12:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.019212 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.019305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.019320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.019341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.019359 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.123283 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.123333 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.123345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.123366 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.123379 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.226566 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.226624 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.226637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.226659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.226673 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.277367 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:40:26.665128929 +0000 UTC Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.329785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.330151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.330254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.330396 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.330518 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.433198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.433271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.433285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.433303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.433315 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.537047 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.537154 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.537206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.537238 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.537255 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.640293 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.640332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.640342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.640363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.640375 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.743131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.743170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.743179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.743193 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.743202 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.845506 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.845615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.845642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.845676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.845701 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.949593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.949680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.949714 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.949751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:12 crc kubenswrapper[4740]: I0216 12:54:12.949774 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:12Z","lastTransitionTime":"2026-02-16T12:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.052949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.053024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.053046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.053072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.053090 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.155731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.155773 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.155784 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.155800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.155814 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.258926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.259007 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.259020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.259039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.259055 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.277770 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:12:03.057101645 +0000 UTC Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.280203 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.280247 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.280257 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:13 crc kubenswrapper[4740]: E0216 12:54:13.280337 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.280527 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:13 crc kubenswrapper[4740]: E0216 12:54:13.280549 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:13 crc kubenswrapper[4740]: E0216 12:54:13.280641 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:13 crc kubenswrapper[4740]: E0216 12:54:13.280720 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.293608 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.305883 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.319836 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504
aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.331655 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.343671 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc 
kubenswrapper[4740]: I0216 12:54:13.360433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.360469 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.360480 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.360496 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.360508 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.362135 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.377727 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.393760 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.414126 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.428415 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.439884 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.457639 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:09Z\\\",\\\"message\\\":\\\"_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221368 6809 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221378 6809 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221379 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221390 6809 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0216 12:54:09.221390 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:54:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.462589 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.462633 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.462642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.462658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.462667 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.469080 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.478662 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"071403d5-fba4-44ab-a7f4-639b19b7dfe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3c4fde60d7024db19bf7463e891e23ab4ad03222025aa3c38d27649128c421e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.493062 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.506018 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b
7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.517681 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.528648 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.565245 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.565287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.565297 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.565311 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.565322 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.668190 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.668460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.668528 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.668610 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.668680 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.771104 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.771144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.771159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.771177 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.771188 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.873765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.873914 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.873929 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.873950 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.873965 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.976385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.976531 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.976544 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.976560 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:13 crc kubenswrapper[4740]: I0216 12:54:13.976571 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:13Z","lastTransitionTime":"2026-02-16T12:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.079391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.080067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.080084 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.080153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.080173 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.183948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.184011 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.184029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.184055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.184074 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.278648 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:27:31.237334716 +0000 UTC Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.287680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.287745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.287761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.287784 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.287797 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.390200 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.390248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.390258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.390272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.390284 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.492903 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.492950 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.492960 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.492976 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.492985 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.594675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.594706 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.594719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.594734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.594745 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.699500 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.699562 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.699575 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.699596 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.699608 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.803124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.803172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.803182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.803201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.803213 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.906751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.906812 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.906850 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.906874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:14 crc kubenswrapper[4740]: I0216 12:54:14.906887 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:14Z","lastTransitionTime":"2026-02-16T12:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.009729 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.009777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.009790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.009807 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.009837 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.113198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.113251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.113267 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.113285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.113295 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.219977 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.220052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.220081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.220108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.220126 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.279712 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:21:25.404567996 +0000 UTC Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.281228 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.281296 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:15 crc kubenswrapper[4740]: E0216 12:54:15.281443 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.281592 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:15 crc kubenswrapper[4740]: E0216 12:54:15.281745 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.281780 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:15 crc kubenswrapper[4740]: E0216 12:54:15.281960 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:15 crc kubenswrapper[4740]: E0216 12:54:15.282059 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.323299 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.323348 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.323360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.323379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.323392 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.426602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.426644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.426660 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.426682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.426699 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.530017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.530092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.530110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.530133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.530151 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.633381 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.633443 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.633452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.633471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.633487 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.736322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.736584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.736609 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.736644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.736667 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.839417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.839460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.839469 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.839484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.839492 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.941518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.941577 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.941591 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.941613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:15 crc kubenswrapper[4740]: I0216 12:54:15.941625 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:15Z","lastTransitionTime":"2026-02-16T12:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.044736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.044797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.044812 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.044853 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.044865 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.147681 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.147722 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.147734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.147751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.147762 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.250875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.250944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.250966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.251015 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.251035 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.280900 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 13:17:09.714914109 +0000 UTC Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.353640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.353674 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.353682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.353695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.353705 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.455916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.455981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.455998 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.456026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.456045 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.558473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.558737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.558810 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.558910 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.558980 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.661257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.661335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.661361 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.661403 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.661437 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.763280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.763325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.763337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.763351 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.763361 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.865592 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.866000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.866135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.866249 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.866397 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.968993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.969073 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.969126 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.969152 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:16 crc kubenswrapper[4740]: I0216 12:54:16.969168 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:16Z","lastTransitionTime":"2026-02-16T12:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.072826 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.072881 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.072901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.072924 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.072939 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.175600 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.175917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.175990 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.176058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.176118 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.260177 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.260661 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.260600 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261109 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261210 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.260813 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 
12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261316 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261327 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261371 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.261356414 +0000 UTC m=+148.637705135 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.261498 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.261484618 +0000 UTC m=+148.637833349 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.279273 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.279310 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.279323 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.279339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.279349 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.280291 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.280362 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.280377 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.280889 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.280701 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.280981 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:03:52.396661662 +0000 UTC Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.280400 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.281498 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.281608 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.295190 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.362416 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.362679 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:21.362638707 +0000 UTC m=+148.738987458 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.362851 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.362947 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.363034 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.363138 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:21.363098401 +0000 UTC m=+148.739447172 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.363157 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: E0216 12:54:17.363226 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.363206144 +0000 UTC m=+148.739554905 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.383228 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.383279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.383294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.383314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.383326 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.486390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.486441 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.486458 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.486482 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.486497 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.589858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.589921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.589938 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.589961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.589978 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.692771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.692857 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.692871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.692890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.692902 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.795644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.795720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.795746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.795777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.795802 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.899376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.899447 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.899474 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.899505 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:17 crc kubenswrapper[4740]: I0216 12:54:17.899527 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:17Z","lastTransitionTime":"2026-02-16T12:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.003328 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.003407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.003432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.003465 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.003488 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.107506 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.107912 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.108115 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.108274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.108417 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.210732 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.210779 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.210788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.210802 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.210813 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.281792 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:52:27.768675771 +0000 UTC Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.313715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.313783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.313805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.313875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.313900 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.416013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.416051 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.416066 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.416084 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.416099 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.519279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.519339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.519362 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.519399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.519455 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.622500 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.622541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.622552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.622566 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.622577 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.726104 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.726157 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.726168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.726187 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.726199 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.829278 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.829363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.829376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.829390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.829401 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.931631 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.931678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.931690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.931704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:18 crc kubenswrapper[4740]: I0216 12:54:18.931716 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:18Z","lastTransitionTime":"2026-02-16T12:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.034559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.034634 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.034643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.034658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.034670 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.137234 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.137279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.137295 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.137316 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.137335 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.240255 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.240291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.240304 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.240322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.240333 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.281199 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.281255 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.281199 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:19 crc kubenswrapper[4740]: E0216 12:54:19.281332 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.281348 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:19 crc kubenswrapper[4740]: E0216 12:54:19.281402 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:19 crc kubenswrapper[4740]: E0216 12:54:19.281489 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:19 crc kubenswrapper[4740]: E0216 12:54:19.281539 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.282204 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:44:29.284677875 +0000 UTC Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.343703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.343758 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.343770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.343790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.343802 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.447494 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.447567 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.447584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.447609 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.447628 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.550852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.550897 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.550907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.550922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.550932 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.654244 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.654279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.654292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.654310 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.654334 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.757121 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.757300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.757343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.757373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.757391 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.860560 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.860636 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.860659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.860690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.860714 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.964210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.964279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.964302 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.964332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:19 crc kubenswrapper[4740]: I0216 12:54:19.964355 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:19Z","lastTransitionTime":"2026-02-16T12:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.066735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.066772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.066780 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.066794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.066803 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.169801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.169890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.169906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.169930 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.169948 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.273370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.273425 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.273439 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.273456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.273473 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.282735 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:20:46.830230805 +0000 UTC Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.375993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.376034 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.376065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.376116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.376132 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.479180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.479248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.479270 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.479301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.479325 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.582019 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.582060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.582074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.582091 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.582102 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.684990 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.685117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.685188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.685265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.685290 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.788217 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.788265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.788275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.788289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.788304 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.890757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.890796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.890859 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.890885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.890896 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.993804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.993890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.993904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.993921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:20 crc kubenswrapper[4740]: I0216 12:54:20.993934 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:20Z","lastTransitionTime":"2026-02-16T12:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.096009 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.096050 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.096064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.096081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.096095 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.142039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.142120 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.142144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.142172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.142192 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.156787 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.162035 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.162088 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.162102 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.162130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.162145 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.176784 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.181224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.181308 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.181329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.181360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.181381 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.197350 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.201964 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.202016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.202029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.202051 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.202067 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.224106 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.228468 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.228525 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.228538 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.228555 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.228568 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.239649 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:21Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.239770 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.241231 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.241271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.241279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.241294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.241304 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.280589 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.280674 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.280720 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.280953 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.280990 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.281123 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.281228 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:21 crc kubenswrapper[4740]: E0216 12:54:21.281284 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.282916 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:29:29.409245712 +0000 UTC Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.343737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.343788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.343800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.343835 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.343855 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.446668 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.446708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.446724 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.446740 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.446753 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.549338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.549418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.549430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.549448 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.549461 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.651998 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.652098 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.652119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.652142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.652159 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.754552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.754608 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.754621 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.754640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.754652 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.857265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.857360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.857377 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.857434 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.857454 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.960892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.960942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.960951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.960972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:21 crc kubenswrapper[4740]: I0216 12:54:21.960986 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:21Z","lastTransitionTime":"2026-02-16T12:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.064678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.064737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.064749 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.064771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.064786 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.167392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.167467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.167485 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.167550 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.167569 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.269909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.269963 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.269975 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.269994 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.270007 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.283561 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 05:00:12.395796084 +0000 UTC Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.372268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.372350 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.372375 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.372407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.372431 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.474917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.474965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.474975 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.474993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.475005 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.577550 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.577598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.577612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.577634 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.577651 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.680442 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.680546 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.680564 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.680593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.680617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.783238 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.783295 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.783307 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.783325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.783338 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.885603 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.885638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.885649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.885667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.885677 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.988885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.988937 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.988952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.988973 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:22 crc kubenswrapper[4740]: I0216 12:54:22.988988 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:22Z","lastTransitionTime":"2026-02-16T12:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.091888 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.091970 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.091991 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.092018 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.092037 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.194804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.194893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.194908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.194934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.194950 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.281045 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.281113 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:23 crc kubenswrapper[4740]: E0216 12:54:23.281215 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.281384 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.281408 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:23 crc kubenswrapper[4740]: E0216 12:54:23.281452 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:23 crc kubenswrapper[4740]: E0216 12:54:23.281571 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:23 crc kubenswrapper[4740]: E0216 12:54:23.281672 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.284694 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:22:36.977366539 +0000 UTC Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.296394 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttqrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42324c80-0f4d-4a2b-8374-fa2358bc8217\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880d44a5651eb29abc04aa8be731a594e4418cbdc83c82b7c3e2e74902566a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxnxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttqrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.297317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.297355 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.297365 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.297385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.297394 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.310984 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"824399c3-585c-4a91-898a-eff8bafdf0e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b768790a7ce7f2078372a51a1443622a5f8caeeb5f8f2165931d5e08509f88bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30695117843415909c595c7d29fc3cac4d7f5d3688f0d5bc7f62d4aea09ee91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316408f9577228831fe8b6a21b79c46d914f2c2bccce87a6f407b307df19c4cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0
a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.323694 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.339555 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.356014 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600888b0cf29568a3442a3b849d0fd213cadf5eba910787ecd562e3c0348274b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.379502 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v88dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21f981d4-46dd-4bb5-b244-aaf603008c5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:01Z\\\",\\\"message\\\":\\\"2026-02-16T12:53:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623\\\\n2026-02-16T12:53:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bc9df77-8a97-448b-900f-ce7ffbcd6623 to /host/opt/cni/bin/\\\\n2026-02-16T12:53:16Z [verbose] multus-daemon started\\\\n2026-02-16T12:53:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T12:54:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cqh7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v88dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.396648 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6db
e1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.399597 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.399668 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.399690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.399719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.399741 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.407917 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.418451 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.431837 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31984faa-a340-44ed-868a-5e6e2a8dab7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 12:53:13.130275 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 12:53:13.130435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 12:53:13.131865 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2505641220/tls.crt::/tmp/serving-cert-2505641220/tls.key\\\\\\\"\\\\nI0216 12:53:13.522578 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 12:53:13.560491 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 12:53:13.560871 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 12:53:13.560994 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 12:53:13.561890 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 12:53:13.571190 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 12:53:13.571217 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571222 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 12:53:13.571226 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 12:53:13.571229 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 12:53:13.571233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 12:53:13.571236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 12:53:13.571260 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 12:53:13.575006 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.442467 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41e9ae3c-a9af-4bf8-a9d1-9e1a4108ce19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc969334e691ec566d3ffdba301654e37bbea75aa0ec7453ae57053e436e4934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://954ed311ff927357ca1284957836db823ede68c25e46e045201ad06c48d6fb45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92aa52eca3fd179e8a77c03fa5681ff9c8c573c5ff3cc7f154476df33b28d7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://308248888f31712b15d8e419237a8a0f30e1cdfff41bae0cd2840cf6799e41c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.453472 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"071403d5-fba4-44ab-a7f4-639b19b7dfe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3c4fde60d7024db19bf7463e891e23ab4ad03222025aa3c38d27649128c421e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e56b3c9786c09b2bdc602dcd68ee371a6df44a454b1e68574c02f6a322501ed9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.468315 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://deb92b4f815477d03702458dc991e1837e544abb43cf09b96f4be0fc473c7d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.481791 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0847e64b0dc2932a7230b29b7b849756f55801c532a736be2a2980e2de5d604b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://440524c5369e07ccc10026e440073e9d45e03fe234b7e55274248ba086e1395f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.495326 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a46e0708-a1b9-4055-8abc-b3d8de6e5245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bde9da51c1f0f003077f567db4a3333488163f4008fa2ecf6220105cef17df03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c02
16ce52f6f8dc0bf5356d090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plntj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q4qtj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.502551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.502635 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.502654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc 
kubenswrapper[4740]: I0216 12:54:23.502677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.502695 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.520754 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4734b9dd-f672-4895-86b3-538d9012af9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T12:54:09Z\\\",\\\"message\\\":\\\"_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221368 6809 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221378 6809 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 12:54:09.221379 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 12:54:09.221390 6809 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0216 12:54:09.221390 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T12:54:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa84c58426f16785e2
3f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rml5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-msmgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.542098 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"541c3fc8-2570-4c03-87b4-65f25ff06131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9c8099bb5eba996bc3d8d2e863bd70633bd9b0254c3fe5821fc4793cf046d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c0eeeb27377d61443f7754bfac1381f13b4f3a82ba264f61d1f9e1f226ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ec7459bbcca61588e290cb35a3f34e0554be0a8ecdb013266b263c0c23ec9cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84842791f89c497895c2a953a0e71d29b46aa338838efc39995fc2b0ab32ca89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad9616d37e41997011d3984ba488307ed05ea1256b99562f50f2536d76cec56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d375bd81abcdcb3e98158153980ce48ba7e8af92764222e3b5effef93ae716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22d375bd81abcdcb3e98158153980ce48ba7e8af92764222e3b5effef93ae716\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T12:52:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed8cdbaffa6d92f7bb70361af17a1a1ac36209166e6da8115c0ed1a5261f3712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed8cdbaffa6d92f7bb70361af17a1a1ac36209166e6da8115c0ed1a5261f3712\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e6ca6d609b2a2ecae1283536c5a981a4817d9e704b4e9629190065843360a55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6ca6d609b2a2ecae1283536c5a981a4817d9e704b4e9629190065843360a55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:52:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.558780 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"872ae2f5-5967-4ebe-b05f-148a0f7402f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759707cdfe053a9a2b0fdbeb0489374927b98ba51889e72d9d34e59d4362d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33c7154c3f14003f4b9bafb160385b7dd77d546e23fadce5ef6368612a4efa7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzdl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:26Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-grlzn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.574215 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.605568 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.605627 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.605638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.605655 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.605664 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.707926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.707999 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.708024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.708055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.708079 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.810896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.811016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.811030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.811050 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.811063 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.913705 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.913777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.913790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.913805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:23 crc kubenswrapper[4740]: I0216 12:54:23.913837 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:23Z","lastTransitionTime":"2026-02-16T12:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.017114 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.017149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.017159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.017177 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.017194 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.120273 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.120320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.120335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.120357 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.120373 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.223329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.223397 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.223416 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.223440 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.223452 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.286010 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:01:18.786006306 +0000 UTC Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.325750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.325807 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.325851 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.325875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.325892 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.428710 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.428782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.428800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.428837 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.428847 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.531761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.532061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.532071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.532086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.532095 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.635208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.635312 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.635342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.635371 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.635392 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.738698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.738798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.738855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.738888 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.738909 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.840598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.840683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.840707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.840738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.840760 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.944354 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.944436 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.944458 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.944487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:24 crc kubenswrapper[4740]: I0216 12:54:24.944506 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:24Z","lastTransitionTime":"2026-02-16T12:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.047689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.047791 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.047865 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.047892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.047911 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.151052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.151117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.151131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.151154 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.151168 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.253106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.253153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.253164 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.253184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.253198 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.280726 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.280771 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:25 crc kubenswrapper[4740]: E0216 12:54:25.280922 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.280987 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:25 crc kubenswrapper[4740]: E0216 12:54:25.281060 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:25 crc kubenswrapper[4740]: E0216 12:54:25.282048 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.282352 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" Feb 16 12:54:25 crc kubenswrapper[4740]: E0216 12:54:25.282533 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.288002 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:34:07.444797284 +0000 UTC Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.288130 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:25 crc kubenswrapper[4740]: E0216 12:54:25.288311 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.356370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.356484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.356501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.356530 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.356550 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.460755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.460843 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.460864 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.460889 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.460908 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.564775 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.564867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.564885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.564910 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.564927 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.668586 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.668656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.668680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.668708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.668729 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.771367 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.771488 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.771513 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.771729 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.771761 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.874518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.874596 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.874615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.874639 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.874657 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.978133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.978188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.978251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.978274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:25 crc kubenswrapper[4740]: I0216 12:54:25.978290 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:25Z","lastTransitionTime":"2026-02-16T12:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.081042 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.081111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.081133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.081205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.081291 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.184039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.184124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.184147 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.184181 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.184264 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.287931 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.287986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.288001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.288024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.288039 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.288188 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:24:00.989278516 +0000 UTC Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.391628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.391698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.391715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.391739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.391759 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.495074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.495155 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.495185 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.495211 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.495229 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.598677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.598760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.598779 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.598829 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.598844 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.701513 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.701588 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.701628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.701654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.701670 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.804651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.804703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.804717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.804745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.804757 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.908613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.908678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.908690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.908707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:26 crc kubenswrapper[4740]: I0216 12:54:26.908719 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:26Z","lastTransitionTime":"2026-02-16T12:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.011223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.011268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.011280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.011297 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.011308 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.113626 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.113679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.113689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.113704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.113713 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.216476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.216537 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.216547 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.216561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.216597 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.281152 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.281204 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.281225 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.281169 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:27 crc kubenswrapper[4740]: E0216 12:54:27.281339 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:27 crc kubenswrapper[4740]: E0216 12:54:27.281475 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:27 crc kubenswrapper[4740]: E0216 12:54:27.281654 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:27 crc kubenswrapper[4740]: E0216 12:54:27.281783 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.288733 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:00:31.744707598 +0000 UTC Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.319386 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.319437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.319448 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.319461 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.319471 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.421982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.422019 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.422028 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.422069 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.422081 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.523946 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.523981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.524012 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.524026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.524036 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.626216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.626467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.626541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.626667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.626758 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.729242 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.729309 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.729321 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.729338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.729349 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.832201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.832240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.832252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.832288 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.832301 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.935689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.935761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.935785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.935847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:27 crc kubenswrapper[4740]: I0216 12:54:27.935871 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:27Z","lastTransitionTime":"2026-02-16T12:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.038516 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.038554 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.038564 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.038578 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.038587 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.140921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.141220 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.141523 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.141742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.141985 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.245237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.245673 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.245761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.245882 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.245962 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.289216 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:33:10.068429409 +0000 UTC Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.349787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.350003 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.350030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.350064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.350103 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.452308 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.452370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.452388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.452413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.452432 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.555447 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.555490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.555502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.555521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.555537 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.658253 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.658753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.658949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.659122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.659389 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.763016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.763085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.763101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.763125 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.763139 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.865883 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.865922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.865932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.865946 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.865954 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.968439 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.968490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.968501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.968518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:28 crc kubenswrapper[4740]: I0216 12:54:28.968529 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:28Z","lastTransitionTime":"2026-02-16T12:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.071658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.071733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.071746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.071762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.071771 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.174780 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.174898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.174924 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.174955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.174980 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.277127 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.277197 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.277225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.277255 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.277278 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.280658 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.280828 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.281076 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:29 crc kubenswrapper[4740]: E0216 12:54:29.281049 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.281189 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:29 crc kubenswrapper[4740]: E0216 12:54:29.281354 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:29 crc kubenswrapper[4740]: E0216 12:54:29.281537 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:29 crc kubenswrapper[4740]: E0216 12:54:29.281716 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.290552 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:48:47.861583045 +0000 UTC Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.381001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.381064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.381089 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.381119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.381141 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.484218 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.484262 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.484274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.484291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.484303 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.586520 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.586928 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.587076 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.587242 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.587406 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.690130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.690199 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.690216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.690241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.690259 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.793212 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.793282 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.793300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.793325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.793345 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.896092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.896163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.896185 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.896216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.896239 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.998957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.998988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.998998 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.999013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:29 crc kubenswrapper[4740]: I0216 12:54:29.999023 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:29Z","lastTransitionTime":"2026-02-16T12:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.101844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.101893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.101914 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.101944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.101964 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.204467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.204536 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.204552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.204579 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.204595 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.290935 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:26:12.282909712 +0000 UTC Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.307542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.307601 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.307618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.307643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.307660 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.410254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.410290 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.410298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.410312 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.410321 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.513294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.513338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.513346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.513362 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.513372 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.615706 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.615781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.615804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.615876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.615900 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.719109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.719188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.719204 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.719225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.719245 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.821720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.821768 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.821780 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.821798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.821835 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.925072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.925148 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.925169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.925194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:30 crc kubenswrapper[4740]: I0216 12:54:30.925211 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:30Z","lastTransitionTime":"2026-02-16T12:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.028484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.028519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.028528 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.028545 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.028556 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.131988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.132057 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.132082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.132113 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.132139 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.234943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.235024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.235046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.235067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.235081 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.280313 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.280397 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.280445 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.280316 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.280338 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.280575 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.280694 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.280758 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.291306 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:07:32.456685316 +0000 UTC Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.337711 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.337772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.337796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.337867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.337888 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.440518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.440885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.441027 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.441180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.441309 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.544071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.544117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.544126 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.544144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.544156 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.604732 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.604787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.604805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.604873 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.604889 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.621243 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:31Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.626736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.626801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.626845 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.626873 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.626889 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.641964 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:31Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.646030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.646068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.646083 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.646103 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.646118 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.666211 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:31Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.670941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.671006 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.671031 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.671060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.671078 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.689162 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:31Z is after 2025-08-24T17:21:41Z"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.693620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.693679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.693700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.693724 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.693742 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.714453 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T12:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16811f3b-c2df-4c7d-9862-6b10264a49b2\\\",\\\"systemUUID\\\":\\\"7ed304a0-359f-427d-948c-1ad2fcad2d68\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:31Z is after 2025-08-24T17:21:41Z"
Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.714615 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.716767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.716808 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.716833 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.716852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.716864 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.737758 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx"
Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.737927 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 12:54:31 crc kubenswrapper[4740]: E0216 12:54:31.737995 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs podName:12044a18-c0cd-4ce6-a1f8-45e3c10095fb nodeName:}" failed. No retries permitted until 2026-02-16 12:55:35.737978388 +0000 UTC m=+163.114327119 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs") pod "network-metrics-daemon-tcfzx" (UID: "12044a18-c0cd-4ce6-a1f8-45e3c10095fb") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.819501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.819593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.819618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.819650 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.819672 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.922315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.922378 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.922394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.922414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:31 crc kubenswrapper[4740]: I0216 12:54:31.922425 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:31Z","lastTransitionTime":"2026-02-16T12:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.024414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.024492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.024515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.024545 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.024570 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.127430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.127537 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.127571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.127615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.127640 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.230580 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.230649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.230672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.230697 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.230713 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.291611 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:11:44.764488761 +0000 UTC
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.333418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.333474 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.333487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.333509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.333522 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.436063 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.436102 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.436113 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.436129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.436142 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.538935 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.538994 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.539005 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.539025 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.539040 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.641583 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.641629 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.641640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.641656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.641669 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.745388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.745453 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.745471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.745495 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.745523 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.849114 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.849195 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.849206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.849222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.849234 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.952419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.952485 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.952508 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.952539 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:32 crc kubenswrapper[4740]: I0216 12:54:32.952561 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:32Z","lastTransitionTime":"2026-02-16T12:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.055073 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.055142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.055168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.055201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.055223 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.158858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.158920 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.158936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.158959 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.158977 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.261739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.261784 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.261794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.261822 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.261833 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.280135 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.280197 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:33 crc kubenswrapper[4740]: E0216 12:54:33.280276 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.280303 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:33 crc kubenswrapper[4740]: E0216 12:54:33.280563 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:33 crc kubenswrapper[4740]: E0216 12:54:33.280917 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.280940 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:33 crc kubenswrapper[4740]: E0216 12:54:33.281130 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.292564 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:16:19.610496554 +0000 UTC Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.306758 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad2ec4df-11e9-4970-bd6b-c258ce2d08bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://504aeef3d407065d117b1e419401d98c14a888154abf6ec3836f34f0c4f1a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e09f7726db62867a14039fd876494ac5968d1e3547a73bfa9e34f260472f3187\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e72d0778d86db7b75699e844577b9ae6e1937767df2af68953aa6981b1e650b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77f8048cddd7d9636fcf512013cbdd95f479c7cb7892250e4e3e5fe58520ad7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12
:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6dbe1275dd1d0dfe1e346e074934764696ec1697e260739ff91ea54e2a9c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616
e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26d1217fdb654027b4c32d9e1abd2d5b178e32a1e746184012d9f4989c137f96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3378b1c57f7d8851f257655c5663568eef0b62da0c75f4e33e92eb47f531ccf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T12:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T12:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnib
in\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mc7pl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mcb2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.324195 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7zs65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6020d2c6-e8f9-4ca7-b6c4-c219193a42e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ac0198c6548b82a18e2394f3ba25cbe8fa500d7bda6d31e410b0ae3e2d921e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T12:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzq7l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7zs65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.340574 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T12:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5cphq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T12:53:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcfzx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T12:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 12:54:33 crc 
kubenswrapper[4740]: I0216 12:54:33.364587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.364646 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.364670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.364739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.364764 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.403628 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.403599733 podStartE2EDuration="1m19.403599733s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.38516144 +0000 UTC m=+100.761510231" watchObservedRunningTime="2026-02-16 12:54:33.403599733 +0000 UTC m=+100.779948494" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.455617 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-v88dn" podStartSLOduration=80.455589934 podStartE2EDuration="1m20.455589934s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.454174283 +0000 UTC m=+100.830523044" watchObservedRunningTime="2026-02-16 12:54:33.455589934 +0000 UTC m=+100.831938675" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.467194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.467257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.467281 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.467313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.467337 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.500744 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podStartSLOduration=80.500721564 podStartE2EDuration="1m20.500721564s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.468494035 +0000 UTC m=+100.844842776" watchObservedRunningTime="2026-02-16 12:54:33.500721564 +0000 UTC m=+100.877070305" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.531902 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.531877042 podStartE2EDuration="16.531877042s" podCreationTimestamp="2026-02-16 12:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.530483951 +0000 UTC m=+100.906832672" watchObservedRunningTime="2026-02-16 12:54:33.531877042 +0000 UTC m=+100.908225783" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.550379 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.550352556 podStartE2EDuration="48.550352556s" podCreationTimestamp="2026-02-16 12:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
12:54:33.549832161 +0000 UTC m=+100.926180902" watchObservedRunningTime="2026-02-16 12:54:33.550352556 +0000 UTC m=+100.926701317" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.571225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.571532 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.571639 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.571730 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.571845 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.585572 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.585522922 podStartE2EDuration="22.585522922s" podCreationTimestamp="2026-02-16 12:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.568741768 +0000 UTC m=+100.945090499" watchObservedRunningTime="2026-02-16 12:54:33.585522922 +0000 UTC m=+100.961871643"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.644793 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.644774198 podStartE2EDuration="1m15.644774198s" podCreationTimestamp="2026-02-16 12:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.643741997 +0000 UTC m=+101.020090738" watchObservedRunningTime="2026-02-16 12:54:33.644774198 +0000 UTC m=+101.021122919"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.645717 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-grlzn" podStartSLOduration=79.645706805 podStartE2EDuration="1m19.645706805s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.629015054 +0000 UTC m=+101.005363775" watchObservedRunningTime="2026-02-16 12:54:33.645706805 +0000 UTC m=+101.022055526"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.674761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.674801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.674828 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.674847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.674858 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.777219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.777263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.777274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.777291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.777302 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.880066 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.880356 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.880429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.880514 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.880589 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.984720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.985434 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.985526 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.985911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:33 crc kubenswrapper[4740]: I0216 12:54:33.986012 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:33Z","lastTransitionTime":"2026-02-16T12:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.088862 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.088918 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.088932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.088948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.088959 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.191988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.192061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.192079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.192109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.192126 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.293030 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:16:38.622071528 +0000 UTC
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.294752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.294904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.294965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.295028 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.295102 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.397452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.397502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.397518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.397538 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.397553 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.499605 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.499918 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.500030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.500111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.500176 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.603615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.603666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.603680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.603701 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.603713 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.706031 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.706062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.706070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.706085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.706094 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.808594 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.808677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.808702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.808733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.808755 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.912341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.912381 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.912392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.912412 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:34 crc kubenswrapper[4740]: I0216 12:54:34.912423 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:34Z","lastTransitionTime":"2026-02-16T12:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.015332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.015369 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.015378 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.015393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.015403 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.118325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.118371 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.118384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.118399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.118411 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.220854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.220890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.220898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.220911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.220920 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.281074 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:54:35 crc kubenswrapper[4740]: E0216 12:54:35.281214 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.281455 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 12:54:35 crc kubenswrapper[4740]: E0216 12:54:35.281534 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.281699 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 12:54:35 crc kubenswrapper[4740]: E0216 12:54:35.281765 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.282012 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx"
Feb 16 12:54:35 crc kubenswrapper[4740]: E0216 12:54:35.282125 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.293212 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:07:44.026109747 +0000 UTC
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.322874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.322921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.322934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.322951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.322962 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.426276 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.426331 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.426358 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.426402 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.426428 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.528867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.528911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.528919 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.528936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.528945 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.631725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.631840 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.631875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.631909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.631929 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.735981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.736054 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.736077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.736106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.736127 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.840023 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.840777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.840901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.840944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.840966 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.944654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.944737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.944760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.944786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:35 crc kubenswrapper[4740]: I0216 12:54:35.944805 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:35Z","lastTransitionTime":"2026-02-16T12:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.048245 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.048341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.048391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.048484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.048522 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.151080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.151121 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.151131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.151145 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.151156 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.254110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.254155 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.254165 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.254179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.254188 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.294346 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:58:25.980995695 +0000 UTC
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.356391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.356450 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.356466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.356490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.356507 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.459862 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.459910 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.459926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.459947 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.459959 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.562622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.562704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.562725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.562750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.562768 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.666619 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.666712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.666748 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.666782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.666870 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.769708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.769768 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.769786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.769848 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.769867 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.873366 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.873430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.873446 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.873484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.873504 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.975931 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.976005 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.976029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.976055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:36 crc kubenswrapper[4740]: I0216 12:54:36.976075 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:36Z","lastTransitionTime":"2026-02-16T12:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.078764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.078842 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.078857 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.078873 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.078885 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.181269 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.181319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.181338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.181365 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.181383 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.280573 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.280611 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.280681 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.280762 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:37 crc kubenswrapper[4740]: E0216 12:54:37.280953 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:37 crc kubenswrapper[4740]: E0216 12:54:37.281129 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:37 crc kubenswrapper[4740]: E0216 12:54:37.281298 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:37 crc kubenswrapper[4740]: E0216 12:54:37.281548 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.283844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.283885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.283897 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.283919 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.283934 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.294902 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:59:31.100000999 +0000 UTC Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.387324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.387408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.387426 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.387478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.387493 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.490864 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.490988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.491329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.491590 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.491890 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.594754 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.594838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.594861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.594885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.594903 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.697658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.697729 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.697741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.697759 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.697772 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.800151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.800194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.800203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.800217 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.800227 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.902389 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.902452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.902468 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.902491 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:37 crc kubenswrapper[4740]: I0216 12:54:37.902509 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:37Z","lastTransitionTime":"2026-02-16T12:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.004339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.004377 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.004385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.004403 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.004420 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.107122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.107203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.107222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.107246 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.107263 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.210281 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.210364 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.210392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.210424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.210447 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.281386 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" Feb 16 12:54:38 crc kubenswrapper[4740]: E0216 12:54:38.281543 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-msmgh_openshift-ovn-kubernetes(4734b9dd-f672-4895-86b3-538d9012af9f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.295040 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:35:44.335709445 +0000 UTC Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.312718 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.312793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.312805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.312866 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.312882 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.416052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.416124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.416141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.416168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.416186 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.518966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.519029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.519049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.519072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.519105 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.621583 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.621648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.621667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.621693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.621712 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.723762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.723802 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.723826 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.723840 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.723850 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.826656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.826705 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.826721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.826743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.826758 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.929026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.929088 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.929105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.929129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:38 crc kubenswrapper[4740]: I0216 12:54:38.929148 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:38Z","lastTransitionTime":"2026-02-16T12:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.032569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.032619 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.032635 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.032659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.032677 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.135855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.135920 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.135933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.135951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.135962 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.238525 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.238604 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.238628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.238663 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.238685 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.280513 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.280664 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:39 crc kubenswrapper[4740]: E0216 12:54:39.280727 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.280664 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.280922 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:39 crc kubenswrapper[4740]: E0216 12:54:39.280847 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:39 crc kubenswrapper[4740]: E0216 12:54:39.281054 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:39 crc kubenswrapper[4740]: E0216 12:54:39.281278 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.295987 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:33:23.201423742 +0000 UTC Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.341702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.341769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.341787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.341804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.341838 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.445253 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.445357 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.445383 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.445414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.445434 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.549080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.549159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.549172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.549198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.549216 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.651644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.651700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.651712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.651731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.651742 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.754292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.754337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.754350 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.754369 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.754381 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.856599 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.856643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.856651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.856664 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.856674 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.959628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.959662 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.959670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.959684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:39 crc kubenswrapper[4740]: I0216 12:54:39.959693 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:39Z","lastTransitionTime":"2026-02-16T12:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.062571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.062679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.062719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.062757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.062787 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.166136 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.166186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.166203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.166228 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.166246 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.269241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.269509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.269564 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.269598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.269617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.296333 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:27:06.994486571 +0000 UTC Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.372906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.372974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.372992 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.373022 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.373048 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.476691 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.476769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.476795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.476860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.476884 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.579251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.579312 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.579329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.579353 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.579371 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.682305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.682360 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.682372 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.682391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.682404 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.785577 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.785663 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.785677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.785711 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.785721 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.893502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.893584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.893607 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.893737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.893797 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.996753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.997345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.997517 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.997715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:40 crc kubenswrapper[4740]: I0216 12:54:40.997925 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:40Z","lastTransitionTime":"2026-02-16T12:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.101261 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.101300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.101310 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.101326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.101336 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.204324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.204366 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.204413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.204434 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.204446 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.281152 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.282030 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.282061 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.282176 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:41 crc kubenswrapper[4740]: E0216 12:54:41.282265 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:41 crc kubenswrapper[4740]: E0216 12:54:41.282432 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:41 crc kubenswrapper[4740]: E0216 12:54:41.282580 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:41 crc kubenswrapper[4740]: E0216 12:54:41.282649 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.297269 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:59:16.36824344 +0000 UTC Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.308905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.308940 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.308950 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.308965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.308974 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.412584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.412647 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.412662 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.412686 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.412702 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.514987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.515086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.515105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.515129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.515147 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.617449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.617502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.617516 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.617539 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.617560 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.720735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.720777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.720787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.720800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.720830 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.823471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.823712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.823782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.823864 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.823963 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.850348 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.850423 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.850519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.850557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.850605 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T12:54:41Z","lastTransitionTime":"2026-02-16T12:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.897248 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ttqrb" podStartSLOduration=88.897228853 podStartE2EDuration="1m28.897228853s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:33.654515894 +0000 UTC m=+101.030864615" watchObservedRunningTime="2026-02-16 12:54:41.897228853 +0000 UTC m=+109.273577574" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.898178 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5"] Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.898661 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.900549 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.900744 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.900889 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.901406 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.916303 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mcb2z" podStartSLOduration=88.916272294 
podStartE2EDuration="1m28.916272294s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:41.915962035 +0000 UTC m=+109.292310806" watchObservedRunningTime="2026-02-16 12:54:41.916272294 +0000 UTC m=+109.292621055" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.931876 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7zs65" podStartSLOduration=88.931798072 podStartE2EDuration="1m28.931798072s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:41.931303017 +0000 UTC m=+109.307651748" watchObservedRunningTime="2026-02-16 12:54:41.931798072 +0000 UTC m=+109.308146823" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.949075 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.949223 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.949296 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-service-ca\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.949391 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:41 crc kubenswrapper[4740]: I0216 12:54:41.949422 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.050447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.050519 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.050547 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.050597 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.050618 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-service-ca\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.051613 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-service-ca\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.051925 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.051984 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.060517 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.075774 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-msxk5\" (UID: \"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.215016 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.297468 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:32:35.968072135 +0000 UTC Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.297792 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.310132 4740 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.905149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" event={"ID":"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36","Type":"ContainerStarted","Data":"498fbaf5c4a72f772692c7e299cfcef7ee483f4ba623b254c2333f1ef05560a6"} Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.905239 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" event={"ID":"bbd335c2-9d57-4f0a-b8d4-46ccae9ccd36","Type":"ContainerStarted","Data":"8205a873042a570f5ceca69c064fbf3f4fb331d981ac467560a634d8a9de7340"} Feb 16 12:54:42 crc kubenswrapper[4740]: I0216 12:54:42.926959 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-msxk5" podStartSLOduration=89.926938376 podStartE2EDuration="1m29.926938376s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:42.926684118 +0000 UTC m=+110.303032839" watchObservedRunningTime="2026-02-16 12:54:42.926938376 +0000 UTC m=+110.303287107" Feb 16 12:54:43 crc 
kubenswrapper[4740]: I0216 12:54:43.280972 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:43 crc kubenswrapper[4740]: I0216 12:54:43.285084 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:43 crc kubenswrapper[4740]: E0216 12:54:43.285076 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:43 crc kubenswrapper[4740]: I0216 12:54:43.285370 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:43 crc kubenswrapper[4740]: E0216 12:54:43.285723 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:43 crc kubenswrapper[4740]: E0216 12:54:43.285955 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:43 crc kubenswrapper[4740]: I0216 12:54:43.285108 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:43 crc kubenswrapper[4740]: E0216 12:54:43.286304 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:45 crc kubenswrapper[4740]: I0216 12:54:45.280531 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:45 crc kubenswrapper[4740]: I0216 12:54:45.280531 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:45 crc kubenswrapper[4740]: I0216 12:54:45.280710 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:45 crc kubenswrapper[4740]: E0216 12:54:45.280744 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:45 crc kubenswrapper[4740]: I0216 12:54:45.281411 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:45 crc kubenswrapper[4740]: E0216 12:54:45.281533 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:45 crc kubenswrapper[4740]: E0216 12:54:45.281620 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:45 crc kubenswrapper[4740]: E0216 12:54:45.281685 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.280793 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.280858 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.280903 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.280893 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:47 crc kubenswrapper[4740]: E0216 12:54:47.281053 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:47 crc kubenswrapper[4740]: E0216 12:54:47.281153 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:47 crc kubenswrapper[4740]: E0216 12:54:47.281249 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:47 crc kubenswrapper[4740]: E0216 12:54:47.281360 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.925610 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/1.log" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.926519 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/0.log" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.926606 4740 generic.go:334] "Generic (PLEG): container finished" podID="21f981d4-46dd-4bb5-b244-aaf603008c5e" containerID="f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb" exitCode=1 Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.926666 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerDied","Data":"f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb"} Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.926801 4740 scope.go:117] "RemoveContainer" containerID="a91fa7a303d5e7ce116e69f01912f8d88f2a11ceb0b506cc26f4c9cf78e5f5bd" Feb 16 12:54:47 crc kubenswrapper[4740]: I0216 12:54:47.927515 4740 scope.go:117] "RemoveContainer" containerID="f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb" Feb 16 12:54:47 crc kubenswrapper[4740]: 
E0216 12:54:47.929291 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-v88dn_openshift-multus(21f981d4-46dd-4bb5-b244-aaf603008c5e)\"" pod="openshift-multus/multus-v88dn" podUID="21f981d4-46dd-4bb5-b244-aaf603008c5e" Feb 16 12:54:48 crc kubenswrapper[4740]: I0216 12:54:48.931133 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/1.log" Feb 16 12:54:49 crc kubenswrapper[4740]: I0216 12:54:49.280689 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:49 crc kubenswrapper[4740]: I0216 12:54:49.280740 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:49 crc kubenswrapper[4740]: I0216 12:54:49.280712 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:49 crc kubenswrapper[4740]: I0216 12:54:49.280689 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:49 crc kubenswrapper[4740]: E0216 12:54:49.280858 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:49 crc kubenswrapper[4740]: E0216 12:54:49.280915 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:49 crc kubenswrapper[4740]: E0216 12:54:49.281097 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:49 crc kubenswrapper[4740]: E0216 12:54:49.281242 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:51 crc kubenswrapper[4740]: I0216 12:54:51.281028 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:51 crc kubenswrapper[4740]: E0216 12:54:51.281179 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:51 crc kubenswrapper[4740]: I0216 12:54:51.281490 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:51 crc kubenswrapper[4740]: E0216 12:54:51.281595 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:51 crc kubenswrapper[4740]: I0216 12:54:51.281794 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:51 crc kubenswrapper[4740]: E0216 12:54:51.281931 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:51 crc kubenswrapper[4740]: I0216 12:54:51.282301 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:51 crc kubenswrapper[4740]: E0216 12:54:51.282406 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:52 crc kubenswrapper[4740]: I0216 12:54:52.281276 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" Feb 16 12:54:52 crc kubenswrapper[4740]: I0216 12:54:52.944047 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/3.log" Feb 16 12:54:52 crc kubenswrapper[4740]: I0216 12:54:52.946851 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerStarted","Data":"0a6a80333534e58b90885aab282874ad3931ca7d2826046ac167da650d7e688d"} Feb 16 12:54:52 crc kubenswrapper[4740]: I0216 12:54:52.947411 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:54:52 crc kubenswrapper[4740]: I0216 12:54:52.972412 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podStartSLOduration=99.972393729 podStartE2EDuration="1m39.972393729s" 
podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:54:52.971712689 +0000 UTC m=+120.348061400" watchObservedRunningTime="2026-02-16 12:54:52.972393729 +0000 UTC m=+120.348742450" Feb 16 12:54:53 crc kubenswrapper[4740]: I0216 12:54:53.157133 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tcfzx"] Feb 16 12:54:53 crc kubenswrapper[4740]: I0216 12:54:53.157268 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.157372 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.270500 4740 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 12:54:53 crc kubenswrapper[4740]: I0216 12:54:53.280278 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.281164 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:53 crc kubenswrapper[4740]: I0216 12:54:53.281228 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:53 crc kubenswrapper[4740]: I0216 12:54:53.281286 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.281386 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.281475 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:53 crc kubenswrapper[4740]: E0216 12:54:53.398326 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 12:54:54 crc kubenswrapper[4740]: I0216 12:54:54.280552 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:54 crc kubenswrapper[4740]: E0216 12:54:54.280760 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:55 crc kubenswrapper[4740]: I0216 12:54:55.280356 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:55 crc kubenswrapper[4740]: I0216 12:54:55.280362 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:55 crc kubenswrapper[4740]: E0216 12:54:55.280579 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:55 crc kubenswrapper[4740]: E0216 12:54:55.280699 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:55 crc kubenswrapper[4740]: I0216 12:54:55.282542 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:55 crc kubenswrapper[4740]: E0216 12:54:55.282753 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:56 crc kubenswrapper[4740]: I0216 12:54:56.280357 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:56 crc kubenswrapper[4740]: E0216 12:54:56.280615 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:57 crc kubenswrapper[4740]: I0216 12:54:57.280598 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:57 crc kubenswrapper[4740]: I0216 12:54:57.280623 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:57 crc kubenswrapper[4740]: I0216 12:54:57.280846 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:57 crc kubenswrapper[4740]: E0216 12:54:57.282362 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:57 crc kubenswrapper[4740]: E0216 12:54:57.282603 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:57 crc kubenswrapper[4740]: E0216 12:54:57.282527 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:54:58 crc kubenswrapper[4740]: I0216 12:54:58.280822 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:54:58 crc kubenswrapper[4740]: E0216 12:54:58.280958 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:54:58 crc kubenswrapper[4740]: E0216 12:54:58.400119 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 12:54:59 crc kubenswrapper[4740]: I0216 12:54:59.280391 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:54:59 crc kubenswrapper[4740]: E0216 12:54:59.280947 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:54:59 crc kubenswrapper[4740]: I0216 12:54:59.280584 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:54:59 crc kubenswrapper[4740]: E0216 12:54:59.281986 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:54:59 crc kubenswrapper[4740]: I0216 12:54:59.280444 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:54:59 crc kubenswrapper[4740]: E0216 12:54:59.282258 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:55:00 crc kubenswrapper[4740]: I0216 12:55:00.281197 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:00 crc kubenswrapper[4740]: E0216 12:55:00.281398 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:55:01 crc kubenswrapper[4740]: I0216 12:55:01.280469 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:01 crc kubenswrapper[4740]: I0216 12:55:01.280505 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:01 crc kubenswrapper[4740]: I0216 12:55:01.280478 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:01 crc kubenswrapper[4740]: E0216 12:55:01.280608 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:55:01 crc kubenswrapper[4740]: E0216 12:55:01.280890 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:55:01 crc kubenswrapper[4740]: E0216 12:55:01.280874 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:55:02 crc kubenswrapper[4740]: I0216 12:55:02.280735 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:02 crc kubenswrapper[4740]: E0216 12:55:02.280912 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.281213 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.282514 4740 scope.go:117] "RemoveContainer" containerID="f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb" Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.283070 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.283112 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:03 crc kubenswrapper[4740]: E0216 12:55:03.283167 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:55:03 crc kubenswrapper[4740]: E0216 12:55:03.283334 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:55:03 crc kubenswrapper[4740]: E0216 12:55:03.283468 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:55:03 crc kubenswrapper[4740]: E0216 12:55:03.401560 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.988417 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/1.log" Feb 16 12:55:03 crc kubenswrapper[4740]: I0216 12:55:03.988756 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerStarted","Data":"20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c"} Feb 16 12:55:04 crc kubenswrapper[4740]: I0216 12:55:04.280983 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:04 crc kubenswrapper[4740]: E0216 12:55:04.281109 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:55:05 crc kubenswrapper[4740]: I0216 12:55:05.280948 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:05 crc kubenswrapper[4740]: I0216 12:55:05.280953 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:05 crc kubenswrapper[4740]: E0216 12:55:05.281550 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:55:05 crc kubenswrapper[4740]: I0216 12:55:05.281009 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:05 crc kubenswrapper[4740]: E0216 12:55:05.281780 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:55:05 crc kubenswrapper[4740]: E0216 12:55:05.281862 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:55:06 crc kubenswrapper[4740]: I0216 12:55:06.280156 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:06 crc kubenswrapper[4740]: E0216 12:55:06.280327 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:55:07 crc kubenswrapper[4740]: I0216 12:55:07.280218 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:07 crc kubenswrapper[4740]: E0216 12:55:07.280359 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 12:55:07 crc kubenswrapper[4740]: I0216 12:55:07.280387 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:07 crc kubenswrapper[4740]: I0216 12:55:07.280450 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:07 crc kubenswrapper[4740]: E0216 12:55:07.280552 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 12:55:07 crc kubenswrapper[4740]: E0216 12:55:07.280721 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 12:55:08 crc kubenswrapper[4740]: I0216 12:55:08.280978 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:08 crc kubenswrapper[4740]: E0216 12:55:08.281247 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcfzx" podUID="12044a18-c0cd-4ce6-a1f8-45e3c10095fb" Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.280461 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.280520 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.280640 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.285310 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.285475 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.285799 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 12:55:09 crc kubenswrapper[4740]: I0216 12:55:09.285908 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 12:55:10 crc kubenswrapper[4740]: I0216 12:55:10.280941 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx"
Feb 16 12:55:10 crc kubenswrapper[4740]: I0216 12:55:10.282336 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 12:55:10 crc kubenswrapper[4740]: I0216 12:55:10.282986 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.282313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.334357 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.335332 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-65j55"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.335895 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.336394 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.337240 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.341933 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.342925 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.343974 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.345238 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.345668 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.345668 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.346138 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.348974 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349077 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349146 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349013 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349217 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349291 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349463 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349756 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.349795 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.358788 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.359260 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.359539 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jcv2d"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.360116 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.360257 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.360405 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.361100 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.361973 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.362604 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.364875 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.365337 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.365949 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nqbws"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.366765 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.368070 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.368592 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.369095 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.369226 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.370108 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.370624 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-c86mj"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.371991 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.372310 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.372842 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.377864 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.379492 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380062 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380128 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380198 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380370 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380437 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380652 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380920 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.380965 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-m9529"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381036 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381154 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381349 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-gctsd"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381422 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381596 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381768 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381794 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.381977 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.382279 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.382336 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.382565 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.382659 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.382760 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.382890 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.383079 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.383187 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.383095 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.385186 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.386262 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.388255 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gctsd"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.388777 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-c86mj"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.388848 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.388878 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-m9529"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.390026 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.390091 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.390932 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.390950 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s69f4"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.391218 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.391641 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.391998 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.392149 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.402447 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.403127 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.403643 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.404022 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.404313 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.392001 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.392157 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.404828 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.405430 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.406469 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.406979 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.410221 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.410603 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.410742 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.410993 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.411166 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.411293 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.411402 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.411569 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.411980 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.412172 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.413536 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.415891 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.416551 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc2r7\" (UniqueName: \"kubernetes.io/projected/493225bc-7119-4eec-9314-aa63e475d061-kube-api-access-gc2r7\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.418743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-etcd-client\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.419146 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.420332 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.421009 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-auth-proxy-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.421203 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgcqf\" (UniqueName: \"kubernetes.io/projected/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-kube-api-access-qgcqf\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.421369 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66mmd\" (UniqueName: \"kubernetes.io/projected/24d07265-6abd-44a7-83c5-112c01083143-kube-api-access-66mmd\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.421523 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-machine-approver-tls\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.421750 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/83add687-ddae-4960-8e05-c81bc891b8f0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.421930 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzrsl\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-kube-api-access-gzrsl\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.422219 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.422395 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.422569 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/493225bc-7119-4eec-9314-aa63e475d061-serving-cert\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.425604 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.425843 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427211 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-audit-policies\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427247 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-serving-cert\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427340 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427362 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-config\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427388 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24d07265-6abd-44a7-83c5-112c01083143-audit-dir\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427408 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hznzh\" (UniqueName: \"kubernetes.io/projected/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-kube-api-access-hznzh\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427443 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427480 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkn8r\" (UniqueName: \"kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427520 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427540 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427564 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-encryption-config\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427586 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcfns\" (UniqueName: \"kubernetes.io/projected/91631c8c-d18f-44d6-9919-0b5fe8e8d45b-kube-api-access-pcfns\") pod \"downloads-7954f5f757-m9529\" (UID: \"91631c8c-d18f-44d6-9919-0b5fe8e8d45b\") " pod="openshift-console/downloads-7954f5f757-m9529"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.426106 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427613 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83add687-ddae-4960-8e05-c81bc891b8f0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427900 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.427928 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-trusted-ca\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.429326 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.429648 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.429849 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.430544 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.437051 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.505761 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.632839 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.633557 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.633717 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.633932 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.635598 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp"]
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636337 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636357 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636405 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636442 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-auth-proxy-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636467 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgcqf\" (UniqueName: \"kubernetes.io/projected/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-kube-api-access-qgcqf\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5"
Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636492 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66mmd\" (UniqueName:
\"kubernetes.io/projected/24d07265-6abd-44a7-83c5-112c01083143-kube-api-access-66mmd\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636524 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-machine-approver-tls\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636566 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/83add687-ddae-4960-8e05-c81bc891b8f0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636604 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzrsl\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-kube-api-access-gzrsl\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636640 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636668 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636692 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/493225bc-7119-4eec-9314-aa63e475d061-serving-cert\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636702 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637212 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-auth-proxy-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.636719 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-audit-policies\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637287 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-serving-cert\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637373 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637399 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-config\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637422 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24d07265-6abd-44a7-83c5-112c01083143-audit-dir\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637449 
4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hznzh\" (UniqueName: \"kubernetes.io/projected/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-kube-api-access-hznzh\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637486 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637511 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkn8r\" (UniqueName: \"kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637537 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: 
\"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637563 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-config\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637587 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-encryption-config\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637635 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637678 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcfns\" (UniqueName: \"kubernetes.io/projected/91631c8c-d18f-44d6-9919-0b5fe8e8d45b-kube-api-access-pcfns\") pod \"downloads-7954f5f757-m9529\" (UID: \"91631c8c-d18f-44d6-9919-0b5fe8e8d45b\") " pod="openshift-console/downloads-7954f5f757-m9529" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637722 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83add687-ddae-4960-8e05-c81bc891b8f0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637744 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637746 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637765 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-trusted-ca\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637827 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc2r7\" (UniqueName: \"kubernetes.io/projected/493225bc-7119-4eec-9314-aa63e475d061-kube-api-access-gc2r7\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.637862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-etcd-client\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.638319 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.638671 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.638755 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-audit-policies\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.640409 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.640497 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24d07265-6abd-44a7-83c5-112c01083143-audit-dir\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.640756 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-config\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.640865 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.641112 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.641135 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/24d07265-6abd-44a7-83c5-112c01083143-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.643428 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-machine-approver-tls\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.643535 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.643428 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.643752 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-etcd-client\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.644042 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/493225bc-7119-4eec-9314-aa63e475d061-serving-cert\") pod \"console-operator-58897d9998-c86mj\" (UID: 
\"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.644125 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.644394 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-42rhd"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.644581 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.644835 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.645119 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.645141 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.645312 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.645469 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.645509 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5tlhr"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.646508 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.646757 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.646875 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.646885 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647018 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647098 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647114 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647403 4740 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647512 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647558 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.647662 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.648504 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.648737 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.649118 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-serving-cert\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.649862 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.649870 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.649971 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc 
kubenswrapper[4740]: I0216 12:55:12.650036 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.650672 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.651190 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24d07265-6abd-44a7-83c5-112c01083143-encryption-config\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.651353 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.651448 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.653958 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.653996 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.654016 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/83add687-ddae-4960-8e05-c81bc891b8f0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.654080 4740 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.654547 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.656395 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jcv2d"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.658074 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.658262 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.658701 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.660252 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.663032 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.663517 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.663980 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.664037 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.664542 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.666088 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.666519 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.668919 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.670488 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83add687-ddae-4960-8e05-c81bc891b8f0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.671336 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/493225bc-7119-4eec-9314-aa63e475d061-trusted-ca\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.671929 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.673022 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.675956 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.676870 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.677095 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.678883 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.679075 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.680463 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.682718 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gctsd"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.683610 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q8lfc"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.684702 4740 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.684911 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.685437 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.685848 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.686741 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.686771 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.687394 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.687658 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.688293 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.688638 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.689063 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.690174 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2wzdf"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.690680 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.691214 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.691992 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.692386 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.693263 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.693443 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5fhjt"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.694142 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.694438 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.695466 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.696490 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.697768 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q8lfc"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.698978 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.698985 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.700730 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2wzdf"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.702151 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.703506 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.704922 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-7954f5f757-m9529"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.706196 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.707494 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.708702 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nqbws"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.709855 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.710994 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-c86mj"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.712132 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.713157 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h95tf"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.714886 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-njwjd"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.715108 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.715677 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.715792 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.716939 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.718193 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.718661 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.719288 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.720374 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s69f4"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.721501 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.722860 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.724285 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5tlhr"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.725656 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.727613 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.730262 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.737623 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.739705 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.743037 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-28sp5"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.749045 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5fhjt"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.749097 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-28sp5"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.749280 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.750437 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h95tf"] Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.758923 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.778640 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.798823 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.818640 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.839673 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.858758 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.879233 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.899919 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.918964 4740 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.938855 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.959679 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.979328 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 12:55:12 crc kubenswrapper[4740]: I0216 12:55:12.999854 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.020910 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.062858 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66mmd\" (UniqueName: \"kubernetes.io/projected/24d07265-6abd-44a7-83c5-112c01083143-kube-api-access-66mmd\") pod \"apiserver-7bbb656c7d-gbsbz\" (UID: \"24d07265-6abd-44a7-83c5-112c01083143\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.078362 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgcqf\" (UniqueName: \"kubernetes.io/projected/3c88b213-e85e-4b8b-a9ee-f0f3224716ae-kube-api-access-qgcqf\") pod \"openshift-apiserver-operator-796bbdcf4f-gl7j5\" (UID: \"3c88b213-e85e-4b8b-a9ee-f0f3224716ae\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:13 crc 
kubenswrapper[4740]: I0216 12:55:13.104143 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzrsl\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-kube-api-access-gzrsl\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.108715 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.113917 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hznzh\" (UniqueName: \"kubernetes.io/projected/29475a43-ba44-4a2c-8cc9-08da7b1f75c6-kube-api-access-hznzh\") pod \"machine-approver-56656f9798-65j55\" (UID: \"29475a43-ba44-4a2c-8cc9-08da7b1f75c6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.133065 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcfns\" (UniqueName: \"kubernetes.io/projected/91631c8c-d18f-44d6-9919-0b5fe8e8d45b-kube-api-access-pcfns\") pod \"downloads-7954f5f757-m9529\" (UID: \"91631c8c-d18f-44d6-9919-0b5fe8e8d45b\") " pod="openshift-console/downloads-7954f5f757-m9529" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.154467 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkn8r\" (UniqueName: \"kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r\") pod \"controller-manager-879f6c89f-tdlx8\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.159198 4740 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.178632 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.198871 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.229338 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.239431 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.279265 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.285859 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.293377 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc2r7\" (UniqueName: \"kubernetes.io/projected/493225bc-7119-4eec-9314-aa63e475d061-kube-api-access-gc2r7\") pod \"console-operator-58897d9998-c86mj\" (UID: \"493225bc-7119-4eec-9314-aa63e475d061\") " pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:13 crc kubenswrapper[4740]: W0216 12:55:13.293681 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24d07265_6abd_44a7_83c5_112c01083143.slice/crio-5e765d58c0de1d8c6a644c38e194737794a372aec0e009d53df75e960e02e30c WatchSource:0}: Error finding container 
5e765d58c0de1d8c6a644c38e194737794a372aec0e009d53df75e960e02e30c: Status 404 returned error can't find the container with id 5e765d58c0de1d8c6a644c38e194737794a372aec0e009d53df75e960e02e30c Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.310534 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.318872 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.320604 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83add687-ddae-4960-8e05-c81bc891b8f0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tr9dr\" (UID: \"83add687-ddae-4960-8e05-c81bc891b8f0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.331369 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.339759 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.352201 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.358635 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.362606 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.380442 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.387845 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-m9529" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.399533 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.420249 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.429660 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.440177 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 12:55:13 crc kubenswrapper[4740]: W0216 12:55:13.442844 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c88b213_e85e_4b8b_a9ee_f0f3224716ae.slice/crio-6f878f934a0e35be68b7540ca3c5ee14dd8a422b0ffc200676117e04446ed464 WatchSource:0}: Error finding container 
6f878f934a0e35be68b7540ca3c5ee14dd8a422b0ffc200676117e04446ed464: Status 404 returned error can't find the container with id 6f878f934a0e35be68b7540ca3c5ee14dd8a422b0ffc200676117e04446ed464 Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.459389 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.482051 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.492929 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.500562 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.519745 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.542637 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.559486 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.578710 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.599104 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.639253 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.659215 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663412 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663447 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663465 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2jj4\" (UniqueName: \"kubernetes.io/projected/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-kube-api-access-h2jj4\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663482 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663499 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663517 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663531 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663546 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663560 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663577 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-service-ca\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663593 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663607 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663622 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663638 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-image-import-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663667 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663695 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sslp\" (UniqueName: \"kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663723 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663738 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663754 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl4tc\" (UniqueName: \"kubernetes.io/projected/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-kube-api-access-gl4tc\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663767 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663784 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663800 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-config\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 
12:55:13.663832 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-serving-cert\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663846 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-encryption-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663862 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5w72\" (UniqueName: \"kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663877 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-serving-cert\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663892 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: 
\"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663910 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/28956c81-f1c4-471c-9564-5747a0a0aaf8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663926 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663950 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663965 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663982 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2p7j\" (UniqueName: \"kubernetes.io/projected/28956c81-f1c4-471c-9564-5747a0a0aaf8-kube-api-access-t2p7j\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.663997 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-config\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664033 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664049 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-serving-cert\") pod 
\"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664063 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664078 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-audit\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664106 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-config\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664121 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-images\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664168 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664211 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvs95\" (UniqueName: \"kubernetes.io/projected/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-kube-api-access-fvs95\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664231 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/643bf47c-570f-4204-adb1-512cd9e914b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664250 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: 
\"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664268 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-etcd-serving-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664304 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr6wd\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664327 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6tnj\" (UniqueName: \"kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664347 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664365 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-etcd-client\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664382 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-client\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: E0216 12:55:13.664398 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.164385895 +0000 UTC m=+141.540734616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664427 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lvfp\" (UniqueName: \"kubernetes.io/projected/643bf47c-570f-4204-adb1-512cd9e914b8-kube-api-access-8lvfp\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664444 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664458 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82b2k\" (UniqueName: \"kubernetes.io/projected/fb14491a-6043-446a-8b10-626838253345-kube-api-access-82b2k\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664472 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664497 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-audit-dir\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664511 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664526 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-serving-cert\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664541 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-service-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664555 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664570 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664590 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-ca\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.664603 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-node-pullsecrets\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.677934 4740 request.go:700] Waited for 1.010998814s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.678859 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.699146 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.719031 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.739360 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.759208 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.765803 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:13 crc kubenswrapper[4740]: 
E0216 12:55:13.765932 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.26590605 +0000 UTC m=+141.642254771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.765985 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-config\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766009 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766025 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: 
\"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766050 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-socket-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766068 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74217d18-e17c-469b-a492-49b62f2f96c9-config\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766098 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-serving-cert\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766113 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-audit\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980ab133-4d29-4d9e-b359-bf3cb06fbba3-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766156 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr2qz\" (UniqueName: \"kubernetes.io/projected/6f465ee4-90ff-4746-a90f-1e964b6c4d05-kube-api-access-qr2qz\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766172 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766193 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-config\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766214 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc 
kubenswrapper[4740]: I0216 12:55:13.766231 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-etcd-serving-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fld5q\" (UniqueName: \"kubernetes.io/projected/2dc85ee1-e9d1-4d68-b953-30d83f8e7aef-kube-api-access-fld5q\") pod \"migrator-59844c95c7-272mp\" (UID: \"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766262 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6tnj\" (UniqueName: \"kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766277 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-etcd-client\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766293 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-certs\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766309 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r5dd\" (UniqueName: \"kubernetes.io/projected/d5f0e5d1-897e-4200-8ea7-716faf71db56-kube-api-access-7r5dd\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766334 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-cert\") pod \"ingress-canary-5fhjt\" (UID: \"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766353 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766368 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb4dx\" (UniqueName: \"kubernetes.io/projected/74217d18-e17c-469b-a492-49b62f2f96c9-kube-api-access-bb4dx\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766412 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lvfp\" (UniqueName: \"kubernetes.io/projected/643bf47c-570f-4204-adb1-512cd9e914b8-kube-api-access-8lvfp\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766428 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766443 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766471 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766487 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad6fcf0b-0176-4920-93de-563a8f4af054-trusted-ca\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-serving-cert\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766518 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-service-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766534 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766549 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766856 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-mountpoint-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766942 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-node-pullsecrets\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.766985 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-ca\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767011 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767041 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-stats-auth\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767063 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2jj4\" (UniqueName: \"kubernetes.io/projected/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-kube-api-access-h2jj4\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767084 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767109 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767125 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767157 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7393aab-0211-49f3-b683-3cf11cae93c6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767182 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767172 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767202 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74217d18-e17c-469b-a492-49b62f2f96c9-serving-cert\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767224 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/456feb2b-91a3-42ae-aa03-accd55804c79-signing-key\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767247 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767286 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767321 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-node-bootstrap-token\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767333 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-config\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767343 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ml9\" (UniqueName: \"kubernetes.io/projected/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-kube-api-access-d8ml9\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767436 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdrk\" (UniqueName: \"kubernetes.io/projected/d24bd6df-1e79-4e8b-a71a-c3f07422af23-kube-api-access-hwdrk\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767480 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-plugins-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767513 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-webhook-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-ca\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767586 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f50997-a877-4d3f-9cf7-df6d254b48f5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767608 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-node-pullsecrets\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767655 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbrl8\" (UniqueName: \"kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767705 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767742 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f4e839d-cd94-49e9-a386-e90820fceb5c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767795 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767848 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59b2l\" (UniqueName: \"kubernetes.io/projected/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-kube-api-access-59b2l\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767884 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980ab133-4d29-4d9e-b359-bf3cb06fbba3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767919 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqjf7\" (UniqueName: \"kubernetes.io/projected/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-kube-api-access-gqjf7\") pod \"ingress-canary-5fhjt\" (UID: \"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767950 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-apiservice-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.767978 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768036 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psrt2\" (UniqueName: \"kubernetes.io/projected/d2b43cb6-05b8-4834-b187-1377370007fd-kube-api-access-psrt2\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768067 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s29x\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-kube-api-access-7s29x\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768103 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/28956c81-f1c4-471c-9564-5747a0a0aaf8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768138 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768170 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768225 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-metrics-certs\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768260 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7393aab-0211-49f3-b683-3cf11cae93c6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c975g\" (UniqueName: \"kubernetes.io/projected/1a760deb-c84d-4da0-a20b-dac7b17c24c7-kube-api-access-c975g\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768331 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-csi-data-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768369 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-config-volume\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768403 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2p7j\" (UniqueName: \"kubernetes.io/projected/28956c81-f1c4-471c-9564-5747a0a0aaf8-kube-api-access-t2p7j\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768456 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91338fe1-147f-41ff-9816-8cdcb7d1a08b-service-ca-bundle\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768487 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtf4q\" (UniqueName: \"kubernetes.io/projected/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-kube-api-access-vtf4q\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768524 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768557 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-default-certificate\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768621 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768655 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-images\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768681 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768686 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7393aab-0211-49f3-b683-3cf11cae93c6-config\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768718 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768738 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvs95\" (UniqueName: \"kubernetes.io/projected/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-kube-api-access-fvs95\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768757 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768885 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"
Feb 16 12:55:13 crc kubenswrapper[4740]: E0216 12:55:13.769050 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.269038038 +0000 UTC m=+141.645386759 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.769105 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.768561 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.769330 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770000 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770429 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-config\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770480 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/456feb2b-91a3-42ae-aa03-accd55804c79-signing-cabundle\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770636 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tngkz\" (UniqueName: \"kubernetes.io/projected/ad3a4715-2249-418d-b03e-bd5aac43089e-kube-api-access-tngkz\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: \"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770723 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2pvn\" (UniqueName: \"kubernetes.io/projected/2eef055f-7504-4f20-817e-afcd1bb6f996-kube-api-access-g2pvn\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770767 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.770923 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/643bf47c-570f-4204-adb1-512cd9e914b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.771620 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.771775 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.771892 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr6wd\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772029 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad3a4715-2249-418d-b03e-bd5aac43089e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: \"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc"
Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772252 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " 
pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772285 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-etcd-serving-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772319 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-client\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.772973 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad6fcf0b-0176-4920-93de-563a8f4af054-metrics-tls\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.773013 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.773484 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82b2k\" (UniqueName: \"kubernetes.io/projected/fb14491a-6043-446a-8b10-626838253345-kube-api-access-82b2k\") pod \"apiserver-76f77b778f-nqbws\" (UID: 
\"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.773516 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-srv-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.774122 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-audit\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.774583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.774692 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/643bf47c-570f-4204-adb1-512cd9e914b8-images\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775307 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775411 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775455 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-audit-dir\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775547 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxng9\" (UniqueName: \"kubernetes.io/projected/8f4e839d-cd94-49e9-a386-e90820fceb5c-kube-api-access-sxng9\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: 
I0216 12:55:13.775609 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/382ac2b0-b15a-412a-b8fb-e61844137cb1-metrics-tls\") pod \"dns-operator-744455d44c-5tlhr\" (UID: \"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775647 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvm2f\" (UniqueName: \"kubernetes.io/projected/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-kube-api-access-zvm2f\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775701 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b43cb6-05b8-4834-b187-1377370007fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775736 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f50997-a877-4d3f-9cf7-df6d254b48f5-config\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775767 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-images\") pod 
\"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775851 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775888 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrhlc\" (UniqueName: \"kubernetes.io/projected/91338fe1-147f-41ff-9816-8cdcb7d1a08b-kube-api-access-zrhlc\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-registration-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.775955 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-metrics-tls\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.776035 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-pm2wm\" (UniqueName: \"kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.776217 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-service-ca-bundle\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.776593 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.776682 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb14491a-6043-446a-8b10-626838253345-audit-dir\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.777233 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 
16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.794792 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-etcd-client\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.795570 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.796428 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.797224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.797701 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.798076 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.798718 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/643bf47c-570f-4204-adb1-512cd9e914b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.799224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.822017 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-serving-cert\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.822668 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.776105 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b2f50997-a877-4d3f-9cf7-df6d254b48f5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823040 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823066 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980ab133-4d29-4d9e-b359-bf3cb06fbba3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823092 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823110 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-service-ca\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823134 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823152 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-srv-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823172 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw2gw\" (UniqueName: \"kubernetes.io/projected/382ac2b0-b15a-412a-b8fb-e61844137cb1-kube-api-access-sw2gw\") pod \"dns-operator-744455d44c-5tlhr\" (UID: \"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-image-import-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823211 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f4e839d-cd94-49e9-a386-e90820fceb5c-proxy-tls\") pod 
\"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823229 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-profile-collector-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b43cb6-05b8-4834-b187-1377370007fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823261 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k7wp\" (UniqueName: \"kubernetes.io/projected/456feb2b-91a3-42ae-aa03-accd55804c79-kube-api-access-8k7wp\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823280 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2eef055f-7504-4f20-817e-afcd1bb6f996-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823297 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823314 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823333 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sslp\" (UniqueName: \"kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823348 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823408 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823425 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823456 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl4tc\" (UniqueName: \"kubernetes.io/projected/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-kube-api-access-gl4tc\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823474 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-tmpfs\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823495 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-config\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 
12:55:13.823510 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-serving-cert\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823531 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5w72\" (UniqueName: \"kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823550 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-encryption-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823567 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a760deb-c84d-4da0-a20b-dac7b17c24c7-proxy-tls\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823587 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnz4r\" (UniqueName: \"kubernetes.io/projected/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-kube-api-access-hnz4r\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " 
pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823603 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-serving-cert\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823635 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823651 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.823986 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: 
\"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.824545 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-config\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.824935 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fb14491a-6043-446a-8b10-626838253345-image-import-ca\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.825357 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/28956c81-f1c4-471c-9564-5747a0a0aaf8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.826389 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.827325 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.828086 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-service-ca\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.828301 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.830210 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.830581 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.832261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.833007 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-serving-cert\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.833030 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-serving-cert\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.833432 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.833792 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-etcd-client\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.834193 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.834298 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.835149 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb14491a-6043-446a-8b10-626838253345-encryption-config\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.838749 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-serving-cert\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.839120 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.847687 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-c86mj"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.860337 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.861309 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-m9529"] Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.862198 4740 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.880138 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.899510 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.919666 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924369 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:13 crc kubenswrapper[4740]: E0216 12:55:13.924575 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.424551251 +0000 UTC m=+141.800899972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924661 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74217d18-e17c-469b-a492-49b62f2f96c9-config\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924716 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-socket-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924753 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980ab133-4d29-4d9e-b359-bf3cb06fbba3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924786 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr2qz\" (UniqueName: \"kubernetes.io/projected/6f465ee4-90ff-4746-a90f-1e964b6c4d05-kube-api-access-qr2qz\") pod 
\"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924843 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924899 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.924980 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-socket-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: E0216 12:55:13.925286 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.425270954 +0000 UTC m=+141.801619675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925500 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fld5q\" (UniqueName: \"kubernetes.io/projected/2dc85ee1-e9d1-4d68-b953-30d83f8e7aef-kube-api-access-fld5q\") pod \"migrator-59844c95c7-272mp\" (UID: \"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925553 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-certs\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925577 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r5dd\" (UniqueName: \"kubernetes.io/projected/d5f0e5d1-897e-4200-8ea7-716faf71db56-kube-api-access-7r5dd\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-cert\") pod \"ingress-canary-5fhjt\" (UID: 
\"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925685 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925715 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb4dx\" (UniqueName: \"kubernetes.io/projected/74217d18-e17c-469b-a492-49b62f2f96c9-kube-api-access-bb4dx\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925773 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925804 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad6fcf0b-0176-4920-93de-563a8f4af054-trusted-ca\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925855 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" 
(UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-mountpoint-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925908 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-stats-auth\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925953 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7393aab-0211-49f3-b683-3cf11cae93c6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925965 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-mountpoint-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.925988 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74217d18-e17c-469b-a492-49b62f2f96c9-serving-cert\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926045 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/456feb2b-91a3-42ae-aa03-accd55804c79-signing-key\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926104 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-node-bootstrap-token\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926154 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8ml9\" (UniqueName: \"kubernetes.io/projected/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-kube-api-access-d8ml9\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926184 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwdrk\" (UniqueName: \"kubernetes.io/projected/d24bd6df-1e79-4e8b-a71a-c3f07422af23-kube-api-access-hwdrk\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926209 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-plugins-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc 
kubenswrapper[4740]: I0216 12:55:13.926258 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-webhook-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926288 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f4e839d-cd94-49e9-a386-e90820fceb5c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f50997-a877-4d3f-9cf7-df6d254b48f5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbrl8\" (UniqueName: \"kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926421 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59b2l\" (UniqueName: \"kubernetes.io/projected/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-kube-api-access-59b2l\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926451 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980ab133-4d29-4d9e-b359-bf3cb06fbba3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926625 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-plugins-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.926499 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqjf7\" (UniqueName: \"kubernetes.io/projected/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-kube-api-access-gqjf7\") pod \"ingress-canary-5fhjt\" (UID: \"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927283 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psrt2\" (UniqueName: \"kubernetes.io/projected/d2b43cb6-05b8-4834-b187-1377370007fd-kube-api-access-psrt2\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927341 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s29x\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-kube-api-access-7s29x\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927370 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-apiservice-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927450 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927498 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c975g\" (UniqueName: \"kubernetes.io/projected/1a760deb-c84d-4da0-a20b-dac7b17c24c7-kube-api-access-c975g\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: 
\"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927527 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-csi-data-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927584 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f4e839d-cd94-49e9-a386-e90820fceb5c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927582 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-metrics-certs\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927740 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7393aab-0211-49f3-b683-3cf11cae93c6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927834 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-config-volume\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927794 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-csi-data-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91338fe1-147f-41ff-9816-8cdcb7d1a08b-service-ca-bundle\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927932 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtf4q\" (UniqueName: \"kubernetes.io/projected/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-kube-api-access-vtf4q\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.927989 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-default-certificate\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928017 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a7393aab-0211-49f3-b683-3cf11cae93c6-config\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928067 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928093 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tngkz\" (UniqueName: \"kubernetes.io/projected/ad3a4715-2249-418d-b03e-bd5aac43089e-kube-api-access-tngkz\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: \"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928159 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928187 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/456feb2b-91a3-42ae-aa03-accd55804c79-signing-cabundle\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:13 crc 
kubenswrapper[4740]: I0216 12:55:13.928247 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2pvn\" (UniqueName: \"kubernetes.io/projected/2eef055f-7504-4f20-817e-afcd1bb6f996-kube-api-access-g2pvn\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928306 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad3a4715-2249-418d-b03e-bd5aac43089e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: \"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928349 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad6fcf0b-0176-4920-93de-563a8f4af054-metrics-tls\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928412 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-srv-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928468 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/382ac2b0-b15a-412a-b8fb-e61844137cb1-metrics-tls\") pod \"dns-operator-744455d44c-5tlhr\" (UID: 
\"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928507 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxng9\" (UniqueName: \"kubernetes.io/projected/8f4e839d-cd94-49e9-a386-e90820fceb5c-kube-api-access-sxng9\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928554 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f50997-a877-4d3f-9cf7-df6d254b48f5-config\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-images\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928627 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvm2f\" (UniqueName: \"kubernetes.io/projected/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-kube-api-access-zvm2f\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928656 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d2b43cb6-05b8-4834-b187-1377370007fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928664 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91338fe1-147f-41ff-9816-8cdcb7d1a08b-service-ca-bundle\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928683 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrhlc\" (UniqueName: \"kubernetes.io/projected/91338fe1-147f-41ff-9816-8cdcb7d1a08b-kube-api-access-zrhlc\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928735 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-registration-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.928756 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-metrics-tls\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.929256 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ad6fcf0b-0176-4920-93de-563a8f4af054-trusted-ca\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.929513 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7393aab-0211-49f3-b683-3cf11cae93c6-config\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.929721 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2f50997-a877-4d3f-9cf7-df6d254b48f5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.929762 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm2wm\" (UniqueName: \"kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930023 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930055 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980ab133-4d29-4d9e-b359-bf3cb06fbba3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930091 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-srv-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930117 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw2gw\" (UniqueName: \"kubernetes.io/projected/382ac2b0-b15a-412a-b8fb-e61844137cb1-kube-api-access-sw2gw\") pod \"dns-operator-744455d44c-5tlhr\" (UID: \"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930143 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k7wp\" (UniqueName: \"kubernetes.io/projected/456feb2b-91a3-42ae-aa03-accd55804c79-kube-api-access-8k7wp\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930167 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/2eef055f-7504-4f20-817e-afcd1bb6f996-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930196 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f4e839d-cd94-49e9-a386-e90820fceb5c-proxy-tls\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930219 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-profile-collector-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930243 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b43cb6-05b8-4834-b187-1377370007fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930274 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930328 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b43cb6-05b8-4834-b187-1377370007fd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.930677 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.931319 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/980ab133-4d29-4d9e-b359-bf3cb06fbba3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.931821 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7393aab-0211-49f3-b683-3cf11cae93c6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.931984 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.932299 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-stats-auth\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.932301 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f50997-a877-4d3f-9cf7-df6d254b48f5-config\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.932452 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-registration-dir\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.932617 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad6fcf0b-0176-4920-93de-563a8f4af054-metrics-tls\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.933248 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.934240 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.934355 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-tmpfs\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.934409 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a760deb-c84d-4da0-a20b-dac7b17c24c7-proxy-tls\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.934439 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnz4r\" (UniqueName: \"kubernetes.io/projected/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-kube-api-access-hnz4r\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 
12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.934925 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-tmpfs\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.935258 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-images\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.936318 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f50997-a877-4d3f-9cf7-df6d254b48f5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.936877 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/382ac2b0-b15a-412a-b8fb-e61844137cb1-metrics-tls\") pod \"dns-operator-744455d44c-5tlhr\" (UID: \"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.937390 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f4e839d-cd94-49e9-a386-e90820fceb5c-proxy-tls\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.937524 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a760deb-c84d-4da0-a20b-dac7b17c24c7-proxy-tls\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.937651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-srv-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.938834 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-profile-collector-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.938998 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.941571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b43cb6-05b8-4834-b187-1377370007fd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" Feb 16 12:55:13 crc 
kubenswrapper[4740]: I0216 12:55:13.941647 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a760deb-c84d-4da0-a20b-dac7b17c24c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.942327 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f465ee4-90ff-4746-a90f-1e964b6c4d05-srv-cert\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.947460 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2eef055f-7504-4f20-817e-afcd1bb6f996-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.949152 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-default-certificate\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.949167 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91338fe1-147f-41ff-9816-8cdcb7d1a08b-metrics-certs\") pod \"router-default-5444994796-42rhd\" (UID: 
\"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.949725 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad3a4715-2249-418d-b03e-bd5aac43089e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: \"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.949800 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d5f0e5d1-897e-4200-8ea7-716faf71db56-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.950608 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980ab133-4d29-4d9e-b359-bf3cb06fbba3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.960544 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.979458 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.990665 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:13 crc kubenswrapper[4740]: I0216 12:55:13.999082 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.027159 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.034040 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m9529" event={"ID":"91631c8c-d18f-44d6-9919-0b5fe8e8d45b","Type":"ContainerStarted","Data":"61bc7a8f1f7d81fd976e58331fef837ba11654d5e85e6a6d26c678a0bcdd6892"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.034801 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.035073 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.535027198 +0000 UTC m=+141.911376069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.035138 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.038036 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-c86mj" event={"ID":"493225bc-7119-4eec-9314-aa63e475d061","Type":"ContainerStarted","Data":"e946c237141087ec0151e70d67bf7411d255f2e97363c60380d7d87b7356a18d"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.038925 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.040111 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" event={"ID":"798d1269-3882-45e8-898e-a625cf386089","Type":"ContainerStarted","Data":"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.040165 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" 
event={"ID":"798d1269-3882-45e8-898e-a625cf386089","Type":"ContainerStarted","Data":"29e6c5dab661956c91b79a723fe07411f83f7e5c787f55a2531731add29989ac"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.040420 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.042182 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" event={"ID":"3c88b213-e85e-4b8b-a9ee-f0f3224716ae","Type":"ContainerStarted","Data":"9d258440943667a765c8762344a0a70eb2706cbb4ab1c832f0315e738691b4e0"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.042222 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" event={"ID":"3c88b213-e85e-4b8b-a9ee-f0f3224716ae","Type":"ContainerStarted","Data":"6f878f934a0e35be68b7540ca3c5ee14dd8a422b0ffc200676117e04446ed464"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.042897 4740 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tdlx8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.042944 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.044367 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" event={"ID":"83add687-ddae-4960-8e05-c81bc891b8f0","Type":"ContainerStarted","Data":"bf7d7d675b0fc61dcb8cb818b6eedaeae4c546f6fe5e90ade90d97e0ee3a191b"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.046631 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" event={"ID":"29475a43-ba44-4a2c-8cc9-08da7b1f75c6","Type":"ContainerStarted","Data":"97d0463c95281f8c84d643e76732c752bf8d1645784068dd5914f0030e554046"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.046698 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" event={"ID":"29475a43-ba44-4a2c-8cc9-08da7b1f75c6","Type":"ContainerStarted","Data":"d9c8dd9c05d58eac5cb9a434a182b3d9269de7b80a871500be8f0f696a0d7d18"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.046717 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" event={"ID":"29475a43-ba44-4a2c-8cc9-08da7b1f75c6","Type":"ContainerStarted","Data":"a32fdf0c567cc8ab716d31a72f0c0b5840dfab8daea6115187ad0343a8bd7fcc"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.050416 4740 generic.go:334] "Generic (PLEG): container finished" podID="24d07265-6abd-44a7-83c5-112c01083143" containerID="5c2b5cf5627228da9bde0c8638cd742898b9304d143fe5621f94cf73a243585c" exitCode=0 Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.050469 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" event={"ID":"24d07265-6abd-44a7-83c5-112c01083143","Type":"ContainerDied","Data":"5c2b5cf5627228da9bde0c8638cd742898b9304d143fe5621f94cf73a243585c"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.050536 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" event={"ID":"24d07265-6abd-44a7-83c5-112c01083143","Type":"ContainerStarted","Data":"5e765d58c0de1d8c6a644c38e194737794a372aec0e009d53df75e960e02e30c"} Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.058523 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.070999 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-webhook-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.071153 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-apiservice-cert\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.079388 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.090239 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.098905 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.118988 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.130199 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/456feb2b-91a3-42ae-aa03-accd55804c79-signing-key\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.136154 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.137244 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.637190462 +0000 UTC m=+142.013539183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.139535 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.149490 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/456feb2b-91a3-42ae-aa03-accd55804c79-signing-cabundle\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.159788 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.180299 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.198666 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.219544 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.225742 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.238854 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.238980 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.239129 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.73910898 +0000 UTC m=+142.115457691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.239827 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.240161 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.740147502 +0000 UTC m=+142.116496313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.250169 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74217d18-e17c-469b-a492-49b62f2f96c9-serving-cert\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.259775 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.278753 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.299597 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.305961 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74217d18-e17c-469b-a492-49b62f2f96c9-config\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.319341 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.340542 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.340704 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.340789 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.840767558 +0000 UTC m=+142.217116279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.342131 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.342985 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.842961138 +0000 UTC m=+142.219309889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.350948 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-cert\") pod \"ingress-canary-5fhjt\" (UID: \"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.358384 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.378754 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.399715 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.419529 4740 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.439314 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.443302 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.443453 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.94343293 +0000 UTC m=+142.319781651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.443777 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.444106 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:14.94409807 +0000 UTC m=+142.320446791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.460864 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.478949 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.498721 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.510577 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-node-bootstrap-token\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.518539 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.538431 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.543604 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/d24bd6df-1e79-4e8b-a71a-c3f07422af23-certs\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.544739 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.544906 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.044872091 +0000 UTC m=+142.421220812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.544975 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.545321 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.045284714 +0000 UTC m=+142.421633455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.545520 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-metrics-tls\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.559544 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.578564 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.633865 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6tnj\" (UniqueName: \"kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj\") pod \"route-controller-manager-6576b87f9c-wrjdd\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.642733 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-config-volume\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 
12:55:14.646130 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.646272 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.146251221 +0000 UTC m=+142.522599942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.646433 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.646752 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:15.146743797 +0000 UTC m=+142.523092508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.652958 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2jj4\" (UniqueName: \"kubernetes.io/projected/f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc-kube-api-access-h2jj4\") pod \"etcd-operator-b45778765-s69f4\" (UID: \"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.672588 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2p7j\" (UniqueName: \"kubernetes.io/projected/28956c81-f1c4-471c-9564-5747a0a0aaf8-kube-api-access-t2p7j\") pod \"cluster-samples-operator-665b6dd947-8dt95\" (UID: \"28956c81-f1c4-471c-9564-5747a0a0aaf8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.713024 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lvfp\" (UniqueName: \"kubernetes.io/projected/643bf47c-570f-4204-adb1-512cd9e914b8-kube-api-access-8lvfp\") pod \"machine-api-operator-5694c8668f-jcv2d\" (UID: \"643bf47c-570f-4204-adb1-512cd9e914b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.721623 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fvs95\" (UniqueName: \"kubernetes.io/projected/a31d7595-0ee7-48b5-9f1f-19907ed7c92b-kube-api-access-fvs95\") pod \"openshift-config-operator-7777fb866f-5mhmc\" (UID: \"a31d7595-0ee7-48b5-9f1f-19907ed7c92b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.736697 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.744592 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr6wd\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.747477 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.747611 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.24758858 +0000 UTC m=+142.623937301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.748041 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.748376 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.248362665 +0000 UTC m=+142.624711386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.753079 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82b2k\" (UniqueName: \"kubernetes.io/projected/fb14491a-6043-446a-8b10-626838253345-kube-api-access-82b2k\") pod \"apiserver-76f77b778f-nqbws\" (UID: \"fb14491a-6043-446a-8b10-626838253345\") " pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.773078 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5w72\" (UniqueName: \"kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72\") pod \"oauth-openshift-558db77b4-wknn7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") " pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.788281 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.795986 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.796149 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl4tc\" (UniqueName: \"kubernetes.io/projected/f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca-kube-api-access-gl4tc\") pod \"authentication-operator-69f744f599-rh4w9\" (UID: \"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.803272 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nqbws"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.815076 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sslp\" (UniqueName: \"kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp\") pod \"console-f9d7485db-gctsd\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " pod="openshift-console/console-f9d7485db-gctsd"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.819893 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.836782 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.837437 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.888752 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.889365 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gctsd"
Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.889923 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.389903498 +0000 UTC m=+142.766252219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.890017 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.901164 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr2qz\" (UniqueName: \"kubernetes.io/projected/6f465ee4-90ff-4746-a90f-1e964b6c4d05-kube-api-access-qr2qz\") pod \"catalog-operator-68c6474976-bl8xk\" (UID: \"6f465ee4-90ff-4746-a90f-1e964b6c4d05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.915371 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fld5q\" (UniqueName: \"kubernetes.io/projected/2dc85ee1-e9d1-4d68-b953-30d83f8e7aef-kube-api-access-fld5q\") pod \"migrator-59844c95c7-272mp\" (UID: \"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.916102 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.920743 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/980ab133-4d29-4d9e-b359-bf3cb06fbba3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ljpv\" (UID: \"980ab133-4d29-4d9e-b359-bf3cb06fbba3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.926524 4740 request.go:700] Waited for 1.000525883s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.926845 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.945425 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb4dx\" (UniqueName: \"kubernetes.io/projected/74217d18-e17c-469b-a492-49b62f2f96c9-kube-api-access-bb4dx\") pod \"service-ca-operator-777779d784-hxhv8\" (UID: \"74217d18-e17c-469b-a492-49b62f2f96c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.945540 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r5dd\" (UniqueName: \"kubernetes.io/projected/d5f0e5d1-897e-4200-8ea7-716faf71db56-kube-api-access-7r5dd\") pod \"olm-operator-6b444d44fb-5cgnk\" (UID: \"d5f0e5d1-897e-4200-8ea7-716faf71db56\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.972482 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.976566 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7393aab-0211-49f3-b683-3cf11cae93c6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nrwzx\" (UID: \"a7393aab-0211-49f3-b683-3cf11cae93c6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.978176 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwdrk\" (UniqueName: \"kubernetes.io/projected/d24bd6df-1e79-4e8b-a71a-c3f07422af23-kube-api-access-hwdrk\") pod \"machine-config-server-njwjd\" (UID: \"d24bd6df-1e79-4e8b-a71a-c3f07422af23\") " pod="openshift-machine-config-operator/machine-config-server-njwjd"
Feb 16 12:55:14 crc kubenswrapper[4740]: I0216 12:55:14.992193 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:14 crc kubenswrapper[4740]: E0216 12:55:14.992565 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.492553419 +0000 UTC m=+142.868902140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:14.999731 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8ml9\" (UniqueName: \"kubernetes.io/projected/92938f98-5bd3-49e2-be2d-65b0fd5d0c12-kube-api-access-d8ml9\") pod \"package-server-manager-789f6589d5-9j2xl\" (UID: \"92938f98-5bd3-49e2-be2d-65b0fd5d0c12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.011363 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.012226 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"]
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.021851 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.047346 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jcv2d"]
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.057140 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m9529" event={"ID":"91631c8c-d18f-44d6-9919-0b5fe8e8d45b","Type":"ContainerStarted","Data":"f63c862e8b008a3fc4588c15a9f797e9951ebed9f127723332e86bbc95d3ac25"}
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.059049 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-m9529"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.060555 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-m9529 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.060651 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m9529" podUID="91631c8c-d18f-44d6-9919-0b5fe8e8d45b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.060943 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.070145 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.075467 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s29x\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-kube-api-access-7s29x\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.075696 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psrt2\" (UniqueName: \"kubernetes.io/projected/d2b43cb6-05b8-4834-b187-1377370007fd-kube-api-access-psrt2\") pod \"openshift-controller-manager-operator-756b6f6bc6-nq5df\" (UID: \"d2b43cb6-05b8-4834-b187-1377370007fd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.078208 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbrl8\" (UniqueName: \"kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8\") pod \"marketplace-operator-79b997595-d5vhg\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") " pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.078925 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c975g\" (UniqueName: \"kubernetes.io/projected/1a760deb-c84d-4da0-a20b-dac7b17c24c7-kube-api-access-c975g\") pod \"machine-config-operator-74547568cd-7t4lr\" (UID: \"1a760deb-c84d-4da0-a20b-dac7b17c24c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.079423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-c86mj" event={"ID":"493225bc-7119-4eec-9314-aa63e475d061","Type":"ContainerStarted","Data":"da731b7a5bd768dea2648d59618949502bbbfdc567d9ff781bf5f89b61460664"}
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.080206 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-c86mj"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.102409 4740 patch_prober.go:28] interesting pod/console-operator-58897d9998-c86mj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.102517 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-c86mj" podUID="493225bc-7119-4eec-9314-aa63e475d061" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.102615 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.102967 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.602948152 +0000 UTC m=+142.979296873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.104332 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.104540 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.604523812 +0000 UTC m=+142.980872533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.107043 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" event={"ID":"83add687-ddae-4960-8e05-c81bc891b8f0","Type":"ContainerStarted","Data":"9e7b4d8a5c91c86b676c069a6d02d853991d87d4312d1824c0c242276afab1b8"}
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.108373 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqjf7\" (UniqueName: \"kubernetes.io/projected/0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5-kube-api-access-gqjf7\") pod \"ingress-canary-5fhjt\" (UID: \"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5\") " pod="openshift-ingress-canary/ingress-canary-5fhjt"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.113393 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-njwjd"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.124681 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" event={"ID":"24d07265-6abd-44a7-83c5-112c01083143","Type":"ContainerStarted","Data":"9d079a0caaf0498fc86a38d66c9cb00cea06bc8fc9b341cf38e0c1f762903f42"}
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.128548 4740 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tdlx8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.128602 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.129009 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59b2l\" (UniqueName: \"kubernetes.io/projected/f3fad4e9-9b8b-4c49-ad3d-9c80525475fc-kube-api-access-59b2l\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzjq5\" (UID: \"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.146901 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tngkz\" (UniqueName: \"kubernetes.io/projected/ad3a4715-2249-418d-b03e-bd5aac43089e-kube-api-access-tngkz\") pod \"multus-admission-controller-857f4d67dd-q8lfc\" (UID: \"ad3a4715-2249-418d-b03e-bd5aac43089e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.148613 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nqbws"]
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.154779 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad6fcf0b-0176-4920-93de-563a8f4af054-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9h8v8\" (UID: \"ad6fcf0b-0176-4920-93de-563a8f4af054\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.177635 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2pvn\" (UniqueName: \"kubernetes.io/projected/2eef055f-7504-4f20-817e-afcd1bb6f996-kube-api-access-g2pvn\") pod \"control-plane-machine-set-operator-78cbb6b69f-m9krp\" (UID: \"2eef055f-7504-4f20-817e-afcd1bb6f996\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.206438 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.207336 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.707305476 +0000 UTC m=+143.083654267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.213705 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.232654 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.232752 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtf4q\" (UniqueName: \"kubernetes.io/projected/bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05-kube-api-access-vtf4q\") pod \"packageserver-d55dfcdfc-n92bx\" (UID: \"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.233847 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvm2f\" (UniqueName: \"kubernetes.io/projected/e89bf85c-b3ee-4ac6-a904-346cba8dbb49-kube-api-access-zvm2f\") pod \"csi-hostpathplugin-h95tf\" (UID: \"e89bf85c-b3ee-4ac6-a904-346cba8dbb49\") " pod="hostpath-provisioner/csi-hostpathplugin-h95tf"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.238603 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"]
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.240328 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.242713 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw2gw\" (UniqueName: \"kubernetes.io/projected/382ac2b0-b15a-412a-b8fb-e61844137cb1-kube-api-access-sw2gw\") pod \"dns-operator-744455d44c-5tlhr\" (UID: \"382ac2b0-b15a-412a-b8fb-e61844137cb1\") " pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.248157 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx"
Feb 16 12:55:15 crc kubenswrapper[4740]: W0216 12:55:15.253903 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b3c2258_4f58_414c_a893_c721b5ac9c03.slice/crio-fa1231c777ac869082e12b271c9de8f207a251381328df828b3f2a937306e447 WatchSource:0}: Error finding container fa1231c777ac869082e12b271c9de8f207a251381328df828b3f2a937306e447: Status 404 returned error can't find the container with id fa1231c777ac869082e12b271c9de8f207a251381328df828b3f2a937306e447
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.267551 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.279644 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrhlc\" (UniqueName: \"kubernetes.io/projected/91338fe1-147f-41ff-9816-8cdcb7d1a08b-kube-api-access-zrhlc\") pod \"router-default-5444994796-42rhd\" (UID: \"91338fe1-147f-41ff-9816-8cdcb7d1a08b\") " pod="openshift-ingress/router-default-5444994796-42rhd"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.280601 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k7wp\" (UniqueName: \"kubernetes.io/projected/456feb2b-91a3-42ae-aa03-accd55804c79-kube-api-access-8k7wp\") pod \"service-ca-9c57cc56f-2wzdf\" (UID: \"456feb2b-91a3-42ae-aa03-accd55804c79\") " pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.281324 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr"
Feb 16 12:55:15 crc kubenswrapper[4740]: W0216 12:55:15.290022 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb14491a_6043_446a_8b10_626838253345.slice/crio-c81874774b2ee0e8238fef38e4c4e7b42c77ef5f86a5d2fbd6296d79429462c7 WatchSource:0}: Error finding container c81874774b2ee0e8238fef38e4c4e7b42c77ef5f86a5d2fbd6296d79429462c7: Status 404 returned error can't find the container with id c81874774b2ee0e8238fef38e4c4e7b42c77ef5f86a5d2fbd6296d79429462c7
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.296853 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm2wm\" (UniqueName: \"kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm\") pod \"collect-profiles-29520765-mt89v\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.298522 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.313970 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.316310 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.816291716 +0000 UTC m=+143.192640437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.324108 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2f50997-a877-4d3f-9cf7-df6d254b48f5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hkbdj\" (UID: \"b2f50997-a877-4d3f-9cf7-df6d254b48f5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.329464 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.336211 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.343468 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rh4w9"]
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.344003 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.348065 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxng9\" (UniqueName: \"kubernetes.io/projected/8f4e839d-cd94-49e9-a386-e90820fceb5c-kube-api-access-sxng9\") pod \"machine-config-controller-84d6567774-7bdwm\" (UID: \"8f4e839d-cd94-49e9-a386-e90820fceb5c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.351900 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.357063 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnz4r\" (UniqueName: \"kubernetes.io/projected/d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e-kube-api-access-hnz4r\") pod \"dns-default-28sp5\" (UID: \"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e\") " pod="openshift-dns/dns-default-28sp5"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.378321 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5fhjt"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.383552 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.404143 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h95tf"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.415734 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.415893 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.915870429 +0000 UTC m=+143.292219150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.416553 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.417079 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:15.917066037 +0000 UTC m=+143.293414758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.422554 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-28sp5"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.425132 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"]
Feb 16 12:55:15 crc kubenswrapper[4740]: W0216 12:55:15.491510 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda31d7595_0ee7_48b5_9f1f_19907ed7c92b.slice/crio-f1ef8cd25203286dbdfc9bf2821b36ea7c337028ae47c74de1d507f1ce39bfdd WatchSource:0}: Error finding container f1ef8cd25203286dbdfc9bf2821b36ea7c337028ae47c74de1d507f1ce39bfdd: Status 404 returned error can't find the container with id f1ef8cd25203286dbdfc9bf2821b36ea7c337028ae47c74de1d507f1ce39bfdd
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.517697 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.518117 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.018100246 +0000 UTC m=+143.394448967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.525442 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj"
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.537307 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv"]
Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.555734 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.577354 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.577414 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.579332 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.589837 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.619922 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.620342 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:16.120322712 +0000 UTC m=+143.496671433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.634218 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.723419 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s69f4"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.724299 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gctsd"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.729313 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.729963 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.229942532 +0000 UTC m=+143.606291263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: W0216 12:55:15.744466 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod980ab133_4d29_4d9e_b359_bf3cb06fbba3.slice/crio-cf9be9976faac3cb35ea4d86e6781bea2aa469c34c6fc8abedd04b20c4d5c422 WatchSource:0}: Error finding container cf9be9976faac3cb35ea4d86e6781bea2aa469c34c6fc8abedd04b20c4d5c422: Status 404 returned error can't find the container with id cf9be9976faac3cb35ea4d86e6781bea2aa469c34c6fc8abedd04b20c4d5c422 Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.764403 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.777679 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.796181 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.831553 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.831938 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.331924261 +0000 UTC m=+143.708272982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.898620 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.928397 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl"] Feb 16 12:55:15 crc kubenswrapper[4740]: I0216 12:55:15.933755 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:15 crc kubenswrapper[4740]: E0216 12:55:15.934865 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:16.43483558 +0000 UTC m=+143.811184301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.015750 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gl7j5" podStartSLOduration=123.015730045 podStartE2EDuration="2m3.015730045s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:16.014415044 +0000 UTC m=+143.390763765" watchObservedRunningTime="2026-02-16 12:55:16.015730045 +0000 UTC m=+143.392078776" Feb 16 12:55:16 crc kubenswrapper[4740]: W0216 12:55:16.028064 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eef055f_7504_4f20_817e_afcd1bb6f996.slice/crio-0b7a93dfca8f26fe9e5bcd5451d038490ecf6dc08547a7e40f201c676ec90cbf WatchSource:0}: Error finding container 0b7a93dfca8f26fe9e5bcd5451d038490ecf6dc08547a7e40f201c676ec90cbf: Status 404 returned error can't find the container with id 0b7a93dfca8f26fe9e5bcd5451d038490ecf6dc08547a7e40f201c676ec90cbf Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.037564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.038521 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.538501442 +0000 UTC m=+143.914850163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: W0216 12:55:16.040552 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92938f98_5bd3_49e2_be2d_65b0fd5d0c12.slice/crio-180a17087e5768211228d0870db72849e32e0f0b64edb8e0c0bb27fc622985fd WatchSource:0}: Error finding container 180a17087e5768211228d0870db72849e32e0f0b64edb8e0c0bb27fc622985fd: Status 404 returned error can't find the container with id 180a17087e5768211228d0870db72849e32e0f0b64edb8e0c0bb27fc622985fd Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.117736 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tr9dr" podStartSLOduration=123.117715555 podStartE2EDuration="2m3.117715555s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:16.063463457 +0000 UTC m=+143.439812168" watchObservedRunningTime="2026-02-16 12:55:16.117715555 +0000 UTC m=+143.494064276" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.136784 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.138299 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.139480 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.639462079 +0000 UTC m=+144.015810800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.178588 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5tlhr"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.189888 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q8lfc"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.241329 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.242680 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.742661876 +0000 UTC m=+144.119010597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.258340 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" event={"ID":"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca","Type":"ContainerStarted","Data":"bf0e44a27587bc7fb33843c6462760b3b44d3c507f8b13805f5414a1f648babe"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.266597 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" event={"ID":"3b3c2258-4f58-414c-a893-c721b5ac9c03","Type":"ContainerStarted","Data":"ba6d6a7ff15c9121a27bcfb14f6c962701892713ba24644d2ad95665183ec8f9"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.266656 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" event={"ID":"3b3c2258-4f58-414c-a893-c721b5ac9c03","Type":"ContainerStarted","Data":"fa1231c777ac869082e12b271c9de8f207a251381328df828b3f2a937306e447"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.268299 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.269962 4740 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wrjdd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.270016 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.277175 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.286232 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" event={"ID":"6f465ee4-90ff-4746-a90f-1e964b6c4d05","Type":"ContainerStarted","Data":"766f4abd04d39551a6838130123deb458c86dbe4df8e515db1ffed9b714c645d"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.301984 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.302147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" event={"ID":"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef","Type":"ContainerStarted","Data":"3fa93212d09678929fe979c3b81c37a4dea5659d5b87e6ecb28d6c5abf3f21af"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.314311 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" event={"ID":"74217d18-e17c-469b-a492-49b62f2f96c9","Type":"ContainerStarted","Data":"7d6e773965cc226dfbf4e70cb2da1ffd37af20003c9cd7fd5b91cacdc47ec476"} Feb 16 12:55:16 crc 
kubenswrapper[4740]: I0216 12:55:16.315805 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" event={"ID":"980ab133-4d29-4d9e-b359-bf3cb06fbba3","Type":"ContainerStarted","Data":"cf9be9976faac3cb35ea4d86e6781bea2aa469c34c6fc8abedd04b20c4d5c422"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.316651 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" event={"ID":"d5f0e5d1-897e-4200-8ea7-716faf71db56","Type":"ContainerStarted","Data":"6bc48c70d3f7ed96c98294a637f1bf5dd4a1152f3c164513dddb10226ae850ad"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.325524 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" event={"ID":"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc","Type":"ContainerStarted","Data":"32061023c32e4372677caf448a149aad546020eecb225c7e2dc4ef293d6c3732"} Feb 16 12:55:16 crc kubenswrapper[4740]: W0216 12:55:16.334321 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad3a4715_2249_418d_b03e_bd5aac43089e.slice/crio-c086828261692696cd9c3d38eddc09e53a1f01e8812dafd59d7b001dab3cd8ab WatchSource:0}: Error finding container c086828261692696cd9c3d38eddc09e53a1f01e8812dafd59d7b001dab3cd8ab: Status 404 returned error can't find the container with id c086828261692696cd9c3d38eddc09e53a1f01e8812dafd59d7b001dab3cd8ab Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.334371 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-42rhd" event={"ID":"91338fe1-147f-41ff-9816-8cdcb7d1a08b","Type":"ContainerStarted","Data":"8c57b07157a8e862e051683f72f8f5e2c2dc05d88b4b049dfe9b8e55427d031c"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.344501 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.346369 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.846346159 +0000 UTC m=+144.222694880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.347453 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.349574 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.353557 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" event={"ID":"92938f98-5bd3-49e2-be2d-65b0fd5d0c12","Type":"ContainerStarted","Data":"180a17087e5768211228d0870db72849e32e0f0b64edb8e0c0bb27fc622985fd"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.356787 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gctsd" event={"ID":"adc3a749-7453-4afe-ba48-f34188be4832","Type":"ContainerStarted","Data":"531ee6088e028abeb40db4014fff58f47925cdba0b3674ddf9755268d1aa83d4"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.374856 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" event={"ID":"a9a22462-173f-4075-927a-30493a5745d7","Type":"ContainerStarted","Data":"b1fdab80b8055470789558626b94e6fd689f065930bcfe2c60fd34eb94175732"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.387960 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" event={"ID":"2eef055f-7504-4f20-817e-afcd1bb6f996","Type":"ContainerStarted","Data":"0b7a93dfca8f26fe9e5bcd5451d038490ecf6dc08547a7e40f201c676ec90cbf"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.396296 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-njwjd" event={"ID":"d24bd6df-1e79-4e8b-a71a-c3f07422af23","Type":"ContainerStarted","Data":"ea721113a767da5df98377a8b7b1c9131ab6acd95e443433dc8ddf40c1ddcff7"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.399679 4740 csr.go:261] certificate signing request csr-hzktw is approved, waiting to be issued Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.407337 4740 csr.go:257] certificate signing request csr-hzktw is issued Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.408265 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" event={"ID":"fb14491a-6043-446a-8b10-626838253345","Type":"ContainerStarted","Data":"c81874774b2ee0e8238fef38e4c4e7b42c77ef5f86a5d2fbd6296d79429462c7"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.414347 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" event={"ID":"a31d7595-0ee7-48b5-9f1f-19907ed7c92b","Type":"ContainerStarted","Data":"f1ef8cd25203286dbdfc9bf2821b36ea7c337028ae47c74de1d507f1ce39bfdd"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.445342 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" event={"ID":"643bf47c-570f-4204-adb1-512cd9e914b8","Type":"ContainerStarted","Data":"5b08a917ae64f2aebd3511c9f67742e608bba4dfa7092c9189fe790a79fbf887"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.446355 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" event={"ID":"643bf47c-570f-4204-adb1-512cd9e914b8","Type":"ContainerStarted","Data":"f3be1ce6f3d9c7ed9896d86c3574154347e66b8924db78345f6a89c465870131"} Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.447795 4740 patch_prober.go:28] interesting pod/console-operator-58897d9998-c86mj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.447891 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-c86mj" podUID="493225bc-7119-4eec-9314-aa63e475d061" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.448760 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-m9529 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 16 12:55:16 crc 
kubenswrapper[4740]: I0216 12:55:16.448845 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m9529" podUID="91631c8c-d18f-44d6-9919-0b5fe8e8d45b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.450683 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.451063 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:16.951051394 +0000 UTC m=+144.327400105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.494991 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.557467 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.558100 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.058073511 +0000 UTC m=+144.434422222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.558228 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.561337 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.061305403 +0000 UTC m=+144.437654154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.656501 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5fhjt"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.661512 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.661670 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.16163349 +0000 UTC m=+144.537982211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.661759 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.662082 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.162070414 +0000 UTC m=+144.538419135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.788561 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.790406 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.29037622 +0000 UTC m=+144.666724941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.799483 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-m9529" podStartSLOduration=123.799466357 podStartE2EDuration="2m3.799466357s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:16.798751634 +0000 UTC m=+144.175100375" watchObservedRunningTime="2026-02-16 12:55:16.799466357 +0000 UTC m=+144.175815078" Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.852645 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.882331 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-28sp5"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.901384 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-c86mj" podStartSLOduration=123.901270511 podStartE2EDuration="2m3.901270511s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:16.893327211 +0000 UTC m=+144.269675932" watchObservedRunningTime="2026-02-16 12:55:16.901270511 +0000 UTC m=+144.277619232" Feb 16 12:55:16 crc 
kubenswrapper[4740]: I0216 12:55:16.901664 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:16 crc kubenswrapper[4740]: E0216 12:55:16.902296 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.402278542 +0000 UTC m=+144.778627253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.911079 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5"] Feb 16 12:55:16 crc kubenswrapper[4740]: I0216 12:55:16.922758 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h95tf"] Feb 16 12:55:16 crc kubenswrapper[4740]: W0216 12:55:16.976163 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a2b33f9_514f_48f7_ae7f_23bb3ea0fab5.slice/crio-36fad69280f11328a40170099676861e33361cee30c5232d1efb9f1f28494210 WatchSource:0}: Error finding container 
36fad69280f11328a40170099676861e33361cee30c5232d1efb9f1f28494210: Status 404 returned error can't find the container with id 36fad69280f11328a40170099676861e33361cee30c5232d1efb9f1f28494210 Feb 16 12:55:17 crc kubenswrapper[4740]: W0216 12:55:17.002212 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f4e839d_cd94_49e9_a386_e90820fceb5c.slice/crio-b84e206f7aa5f18786105ba9a965066d36bd598845bcefc5eca4a60ffdcf596a WatchSource:0}: Error finding container b84e206f7aa5f18786105ba9a965066d36bd598845bcefc5eca4a60ffdcf596a: Status 404 returned error can't find the container with id b84e206f7aa5f18786105ba9a965066d36bd598845bcefc5eca4a60ffdcf596a Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.003035 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.003313 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.503299441 +0000 UTC m=+144.879648162 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.030209 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-65j55" podStartSLOduration=124.030190088 podStartE2EDuration="2m4.030190088s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.029716283 +0000 UTC m=+144.406065014" watchObservedRunningTime="2026-02-16 12:55:17.030190088 +0000 UTC m=+144.406538809" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.034308 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"] Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.042522 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj"] Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.083640 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" podStartSLOduration=124.083621179 podStartE2EDuration="2m4.083621179s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.078025923 +0000 UTC m=+144.454374644" watchObservedRunningTime="2026-02-16 
12:55:17.083621179 +0000 UTC m=+144.459969900" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.104181 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.104578 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.604559017 +0000 UTC m=+144.980907738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.106958 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2wzdf"] Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.206507 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.207362 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.707332901 +0000 UTC m=+145.083681692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.317714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.318196 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.81817909 +0000 UTC m=+145.194527821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.408901 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 12:50:16 +0000 UTC, rotation deadline is 2026-11-13 02:58:08.916460789 +0000 UTC Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.408947 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6470h2m51.507515503s for next certificate rotation Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.419455 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.419781 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:17.919764017 +0000 UTC m=+145.296112738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.422993 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" podStartSLOduration=123.422974827 podStartE2EDuration="2m3.422974827s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.38874197 +0000 UTC m=+144.765090691" watchObservedRunningTime="2026-02-16 12:55:17.422974827 +0000 UTC m=+144.799323548" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.480546 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" event={"ID":"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b","Type":"ContainerStarted","Data":"ecc39d12cb6ac857f193b234c0c65095915f019fb5a183124161212d668749a6"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.491560 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" event={"ID":"b2f50997-a877-4d3f-9cf7-df6d254b48f5","Type":"ContainerStarted","Data":"cb4ef268814cfefbb0b70b06d1836ad6b9484454163f78d47f7245335c7a53ca"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.516948 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" 
event={"ID":"6f465ee4-90ff-4746-a90f-1e964b6c4d05","Type":"ContainerStarted","Data":"f9f5b54a45b720edc8c5ca94b67395f345595fed5ad18ec157c5d42a1b3e65c5"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.517892 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.521728 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.526629 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" event={"ID":"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05","Type":"ContainerStarted","Data":"73810e418d9cf6f8bb87b8f398e00f1d17c609cfccb31bb5b8db9e59d5e96872"} Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.527019 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.02697442 +0000 UTC m=+145.403323141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.534544 4740 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-bl8xk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.534618 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" podUID="6f465ee4-90ff-4746-a90f-1e964b6c4d05" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.541692 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" event={"ID":"456feb2b-91a3-42ae-aa03-accd55804c79","Type":"ContainerStarted","Data":"6942960f514cd783c283b57928f166c7de162a4cc005908788a8ee2b12e3ec0a"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.550995 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" podStartSLOduration=124.550974936 podStartE2EDuration="2m4.550974936s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 12:55:17.550835011 +0000 UTC m=+144.927183732" watchObservedRunningTime="2026-02-16 12:55:17.550974936 +0000 UTC m=+144.927323657" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.551093 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" podStartSLOduration=123.551087769 podStartE2EDuration="2m3.551087769s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.41607418 +0000 UTC m=+144.792422901" watchObservedRunningTime="2026-02-16 12:55:17.551087769 +0000 UTC m=+144.927436490" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.554879 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-njwjd" event={"ID":"d24bd6df-1e79-4e8b-a71a-c3f07422af23","Type":"ContainerStarted","Data":"3aa996d0fd9eb2ec7be7e9cb35d869392afb480c606c3ee19bdabfb96cb7526a"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.570064 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" event={"ID":"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef","Type":"ContainerStarted","Data":"28ef1c4b8d37926ac8c9404b2dae03f9587cde7c7fb460d7b8a838368374d4ff"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.577291 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-njwjd" podStartSLOduration=5.577278013 podStartE2EDuration="5.577278013s" podCreationTimestamp="2026-02-16 12:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.576122737 +0000 UTC m=+144.952471458" watchObservedRunningTime="2026-02-16 12:55:17.577278013 +0000 
UTC m=+144.953626734" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.578520 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gctsd" event={"ID":"adc3a749-7453-4afe-ba48-f34188be4832","Type":"ContainerStarted","Data":"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.591619 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" event={"ID":"8f4e839d-cd94-49e9-a386-e90820fceb5c","Type":"ContainerStarted","Data":"b84e206f7aa5f18786105ba9a965066d36bd598845bcefc5eca4a60ffdcf596a"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.598358 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" event={"ID":"980ab133-4d29-4d9e-b359-bf3cb06fbba3","Type":"ContainerStarted","Data":"f6c34729ed10f79e25b77bea57a32569be8e5e2fcd7e619d50b2f8e4ba1408cf"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.623848 4740 generic.go:334] "Generic (PLEG): container finished" podID="fb14491a-6043-446a-8b10-626838253345" containerID="c6e65c2ba2debad5f6a920c907574927f8de52209e2bd01b05839d0731d3b342" exitCode=0 Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.623929 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" event={"ID":"fb14491a-6043-446a-8b10-626838253345","Type":"ContainerDied","Data":"c6e65c2ba2debad5f6a920c907574927f8de52209e2bd01b05839d0731d3b342"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.627147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5fhjt" event={"ID":"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5","Type":"ContainerStarted","Data":"36fad69280f11328a40170099676861e33361cee30c5232d1efb9f1f28494210"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 
12:55:17.629785 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" event={"ID":"a9a22462-173f-4075-927a-30493a5745d7","Type":"ContainerStarted","Data":"fbb95d26f626afc3bfc63396feee48e9a3723a0831baf7b1a599353c30ec5440"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.630203 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.637230 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.637998 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.137980113 +0000 UTC m=+145.514328834 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.645616 4740 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wknn7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.645679 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.656717 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ljpv" podStartSLOduration=124.656698572 podStartE2EDuration="2m4.656698572s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.656490446 +0000 UTC m=+145.032839177" watchObservedRunningTime="2026-02-16 12:55:17.656698572 +0000 UTC m=+145.033047293" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.660290 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-gctsd" 
podStartSLOduration=124.660275105 podStartE2EDuration="2m4.660275105s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.609749715 +0000 UTC m=+144.986098446" watchObservedRunningTime="2026-02-16 12:55:17.660275105 +0000 UTC m=+145.036623846" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.684581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" event={"ID":"74217d18-e17c-469b-a492-49b62f2f96c9","Type":"ContainerStarted","Data":"8d3f6be83326cb04efddae2a209cc0b4c7ea1232724a6b3473d0c6a5bf6bbacb"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.686003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" event={"ID":"a7393aab-0211-49f3-b683-3cf11cae93c6","Type":"ContainerStarted","Data":"769f1c896eee2f74da4cb78d42864c5f8c6eb5b19197b846ba5885094e3969a9"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.694235 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" event={"ID":"92938f98-5bd3-49e2-be2d-65b0fd5d0c12","Type":"ContainerStarted","Data":"100d57dc7ecd48fee78fab77cd32f563620ce9818c4928895fcf1804af8c43c7"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.706901 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" podStartSLOduration=124.706884231 podStartE2EDuration="2m4.706884231s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.687478551 +0000 UTC m=+145.063827272" watchObservedRunningTime="2026-02-16 
12:55:17.706884231 +0000 UTC m=+145.083232952" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.712480 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" event={"ID":"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc","Type":"ContainerStarted","Data":"b2bb3d6dc9008d4ab841be4341d24cf81e469f03626b812f600039e4181e2b18"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.715563 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" event={"ID":"d2b43cb6-05b8-4834-b187-1377370007fd","Type":"ContainerStarted","Data":"b41252e4259e9f761e0c3f7095d52fcec89ac59940ff32d24b9efc59b9a7ba67"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.717744 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" event={"ID":"1a760deb-c84d-4da0-a20b-dac7b17c24c7","Type":"ContainerStarted","Data":"cfcbdb0ab4462d08f780e84f9512575a935e405f5c083bc85a6432c96f6148f9"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.720057 4740 generic.go:334] "Generic (PLEG): container finished" podID="a31d7595-0ee7-48b5-9f1f-19907ed7c92b" containerID="d52bd0521c30dceb0141fcf325544230c0b46a352b31c82060df558ce08f1b73" exitCode=0 Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.720189 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" event={"ID":"a31d7595-0ee7-48b5-9f1f-19907ed7c92b","Type":"ContainerDied","Data":"d52bd0521c30dceb0141fcf325544230c0b46a352b31c82060df558ce08f1b73"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.733472 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" 
event={"ID":"e89bf85c-b3ee-4ac6-a904-346cba8dbb49","Type":"ContainerStarted","Data":"68a11b47359f227cff050b9d48012ed1cc90c8a0cad7c3648191cfff5c56bad8"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.738842 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.745175 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.245159045 +0000 UTC m=+145.621507856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.745324 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" event={"ID":"9062ffdd-baa5-4ebc-8f40-353fac0e821e","Type":"ContainerStarted","Data":"065139da6c6a631eff569a054a8bb03bcd168cb04767b6d3a09ccc0de2e57e23"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.749484 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-28sp5" 
event={"ID":"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e","Type":"ContainerStarted","Data":"44043d9a540cbbc81a62d66d948f30b96c27ced178ccc7286417a0c9b3be3ac9"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.751496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" event={"ID":"28956c81-f1c4-471c-9564-5747a0a0aaf8","Type":"ContainerStarted","Data":"2580205133e84b6dfdbdc6e20eb3dd15d33dce11e68b539ed9dc6b95cafc2946"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.756076 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" event={"ID":"ad3a4715-2249-418d-b03e-bd5aac43089e","Type":"ContainerStarted","Data":"c086828261692696cd9c3d38eddc09e53a1f01e8812dafd59d7b001dab3cd8ab"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.760336 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" event={"ID":"f9fc47c4-e9bd-46bb-b1c9-c8f6b2d25fca","Type":"ContainerStarted","Data":"cee82cebfd010a21105291ca298b4bb9794db404715ef60484f7ca1333eccef6"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.763171 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" event={"ID":"ad6fcf0b-0176-4920-93de-563a8f4af054","Type":"ContainerStarted","Data":"2be139f079fcc5649706872bbbbd65a294d2ecbb09b125ad1cf822b03bf022a4"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.767780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" event={"ID":"382ac2b0-b15a-412a-b8fb-e61844137cb1","Type":"ContainerStarted","Data":"84c079b8365359cc583de9fa06c17cb129640acde6ffce26f6304b0ce3abc59b"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.774246 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" event={"ID":"643bf47c-570f-4204-adb1-512cd9e914b8","Type":"ContainerStarted","Data":"87eccfacfa87633e077da2cdf279089b724f7a765c9e6627c4f0c23a551ed4ed"} Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.774622 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-m9529 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.774661 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m9529" podUID="91631c8c-d18f-44d6-9919-0b5fe8e8d45b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.778703 4740 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wrjdd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.778775 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.785832 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-c86mj" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.788514 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hxhv8" podStartSLOduration=123.78850293 podStartE2EDuration="2m3.78850293s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.786481296 +0000 UTC m=+145.162830017" watchObservedRunningTime="2026-02-16 12:55:17.78850293 +0000 UTC m=+145.164851651" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.841756 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rh4w9" podStartSLOduration=124.841717064 podStartE2EDuration="2m4.841717064s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.837277834 +0000 UTC m=+145.213626565" watchObservedRunningTime="2026-02-16 12:55:17.841717064 +0000 UTC m=+145.218065785" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.849063 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.853611 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.353559677 +0000 UTC m=+145.729908588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.865003 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jcv2d" podStartSLOduration=124.864983057 podStartE2EDuration="2m4.864983057s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:17.86190737 +0000 UTC m=+145.238256111" watchObservedRunningTime="2026-02-16 12:55:17.864983057 +0000 UTC m=+145.241331778" Feb 16 12:55:17 crc kubenswrapper[4740]: I0216 12:55:17.953322 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:17 crc kubenswrapper[4740]: E0216 12:55:17.953825 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.453787511 +0000 UTC m=+145.830136222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.055029 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.055265 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.555226393 +0000 UTC m=+145.931575114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.055391 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.055735 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.555721698 +0000 UTC m=+145.932070419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.109564 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.109619 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.115942 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz" Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.156275 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.156471 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.656441917 +0000 UTC m=+146.032790628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.156667 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.157051 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.657035286 +0000 UTC m=+146.033384007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.207594 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.257473 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.258126 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:18.758109807 +0000 UTC m=+146.134458528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.589139 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.589473 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.089459482 +0000 UTC m=+146.465808203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.689769 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.690088 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.190067648 +0000 UTC m=+146.566416369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.790731 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.791268 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.291249092 +0000 UTC m=+146.667597893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.841227 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" event={"ID":"f0712ca8-87a5-42bb-8d1f-f9cd9fa0b6dc","Type":"ContainerStarted","Data":"d8fb40d38a679c14d8c7a0d1c7f7d5145f7d8310cc80aba4b42885fea852ceaf"} Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.859136 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" event={"ID":"2dc85ee1-e9d1-4d68-b953-30d83f8e7aef","Type":"ContainerStarted","Data":"3dfb524b29fc7fff49015d3059e0907a6380cf021a014abd770928395271829e"} Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.895747 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:18 crc kubenswrapper[4740]: E0216 12:55:18.896951 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.396932898 +0000 UTC m=+146.773281619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.900704 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-s69f4" podStartSLOduration=125.900680106 podStartE2EDuration="2m5.900680106s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:18.894902985 +0000 UTC m=+146.271251706" watchObservedRunningTime="2026-02-16 12:55:18.900680106 +0000 UTC m=+146.277028827" Feb 16 12:55:18 crc kubenswrapper[4740]: I0216 12:55:18.920710 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" event={"ID":"bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05","Type":"ContainerStarted","Data":"be068403ca2985a0086f52006a6f6ae30b0ce04827de4f4d9b576d48df977318"} Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.014117 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.014466 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.514448176 +0000 UTC m=+146.890796897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.067462 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-42rhd" event={"ID":"91338fe1-147f-41ff-9816-8cdcb7d1a08b","Type":"ContainerStarted","Data":"51cbd64af26c6cd3b28733e82546044f0bb307d276befb74ce466be5112e64cc"} Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.179059 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.179635 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.679602004 +0000 UTC m=+147.055950775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.193467 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" event={"ID":"d5f0e5d1-897e-4200-8ea7-716faf71db56","Type":"ContainerStarted","Data":"b8bf5a7d868dd4c7b60b1cecd7bdc8eae541071a50ff7921610cae0fcb85aa38"}
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.193683 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk"
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.218484 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-42rhd" podStartSLOduration=126.218461076 podStartE2EDuration="2m6.218461076s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:19.208192653 +0000 UTC m=+146.584541374" watchObservedRunningTime="2026-02-16 12:55:19.218461076 +0000 UTC m=+146.594809807"
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.220953 4740 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5cgnk container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.221000 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" podUID="d5f0e5d1-897e-4200-8ea7-716faf71db56" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.225164 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" event={"ID":"a7393aab-0211-49f3-b683-3cf11cae93c6","Type":"ContainerStarted","Data":"369f4df8a7332b82ba92f0d199f15846efbfce63af5960b3bd23043ea709ed0d"}
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.271996 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" event={"ID":"ad3a4715-2249-418d-b03e-bd5aac43089e","Type":"ContainerStarted","Data":"eb97e6818b8a59913c88c8610bf13abf0558b69530ee76868996e1d9ac11ad88"}
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.274249 4740 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-bl8xk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.274283 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" podUID="6f465ee4-90ff-4746-a90f-1e964b6c4d05" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.284207 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gbsbz"
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.284488 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" podStartSLOduration=126.284474693 podStartE2EDuration="2m6.284474693s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:19.239933082 +0000 UTC m=+146.616281803" watchObservedRunningTime="2026-02-16 12:55:19.284474693 +0000 UTC m=+146.660823414"
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.295850 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.296651 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nrwzx" podStartSLOduration=126.296631626 podStartE2EDuration="2m6.296631626s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:19.284404711 +0000 UTC m=+146.660753442" watchObservedRunningTime="2026-02-16 12:55:19.296631626 +0000 UTC m=+146.672980347"
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.297487 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.797464122 +0000 UTC m=+147.173812843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.402973 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.403154 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.903118877 +0000 UTC m=+147.279467598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.403431 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.403753 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:19.903739507 +0000 UTC m=+147.280088228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.504435 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.504649 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.004622301 +0000 UTC m=+147.380971042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.504778 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.505115 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.005108197 +0000 UTC m=+147.381456918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.556180 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-42rhd"
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.557893 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.557939 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.607463 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.607636 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.107607382 +0000 UTC m=+147.483956113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.607824 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.608199 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.10819009 +0000 UTC m=+147.484538811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.708582 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.708783 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.208765485 +0000 UTC m=+147.585114206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.708901 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.709207 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.209183408 +0000 UTC m=+147.585532129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.809849 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.810002 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.30997901 +0000 UTC m=+147.686327731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.810123 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.810530 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.310514637 +0000 UTC m=+147.686863358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.911830 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.912035 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.41200064 +0000 UTC m=+147.788349361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:19 crc kubenswrapper[4740]: I0216 12:55:19.912235 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:19 crc kubenswrapper[4740]: E0216 12:55:19.912643 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.41262216 +0000 UTC m=+147.788970881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.013037 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.013245 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.513224475 +0000 UTC m=+147.889573196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.013353 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.013746 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.513729341 +0000 UTC m=+147.890078062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.114653 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.114834 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.614791192 +0000 UTC m=+147.991139913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.114936 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.115274 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.615265437 +0000 UTC m=+147.991614218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.216193 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.216397 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.716375528 +0000 UTC m=+148.092724259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.276238 4740 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wknn7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.276296 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.278774 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" event={"ID":"382ac2b0-b15a-412a-b8fb-e61844137cb1","Type":"ContainerStarted","Data":"0770376af748569dc44c83607198ba831306735fa97014ab28fe87fe9f35ff8f"}
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.280607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" event={"ID":"2eef055f-7504-4f20-817e-afcd1bb6f996","Type":"ContainerStarted","Data":"5590859155e4a72f7b794ac007f2f81df6d73462563833e5fa89a2166df0d5ea"}
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.282692 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" event={"ID":"28956c81-f1c4-471c-9564-5747a0a0aaf8","Type":"ContainerStarted","Data":"9f28ea3efe4f9888833c94d49676ec60f141bf112df88fcd72e429b982fb05a5"}
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.284443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" event={"ID":"d2b43cb6-05b8-4834-b187-1377370007fd","Type":"ContainerStarted","Data":"5d31c8fa3186311f29b5a5a2d93f4740823beba4efaf99b7c56d10e0e6aec73e"}
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.285712 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" event={"ID":"9062ffdd-baa5-4ebc-8f40-353fac0e821e","Type":"ContainerStarted","Data":"907512b6409722d30c16b894b8c9741e98ad5cd769eb1a4db429190f1ce78cae"}
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.287165 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" event={"ID":"ad6fcf0b-0176-4920-93de-563a8f4af054","Type":"ContainerStarted","Data":"c256af9b499807e274a6149a715efcda6dd0650f10d5231582d5cdf2cc5f554a"}
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.288733 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" event={"ID":"1a760deb-c84d-4da0-a20b-dac7b17c24c7","Type":"ContainerStarted","Data":"c681901f30ae852bdbf12db537784e62b67ddfc527ee5933a9855cd7631ef4a6"}
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.289775 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" event={"ID":"f3fad4e9-9b8b-4c49-ad3d-9c80525475fc","Type":"ContainerStarted","Data":"cd3b3910d4d1170d56096d29ee738829002b152c4cfd46783e7362c2f80ca7f2"}
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.290827 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" event={"ID":"8f4e839d-cd94-49e9-a386-e90820fceb5c","Type":"ContainerStarted","Data":"f509a9cafab782b7f4d2f8571a59e797cdca574f195426eccfaf8781c9c46ad1"}
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.292010 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5fhjt" event={"ID":"0a2b33f9-514f-48f7-ae7f-23bb3ea0fab5","Type":"ContainerStarted","Data":"9a28ca22f31a0524bc58a608850510c8e85d2d1fd1c1bdaab7bf275de915fc85"}
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.292853 4740 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5cgnk container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.292892 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" podUID="d5f0e5d1-897e-4200-8ea7-716faf71db56" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.292950 4740 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-bl8xk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.293005 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" podUID="6f465ee4-90ff-4746-a90f-1e964b6c4d05" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.314544 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m9krp" podStartSLOduration=127.314525937 podStartE2EDuration="2m7.314525937s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:20.302424786 +0000 UTC m=+147.678773547" watchObservedRunningTime="2026-02-16 12:55:20.314525937 +0000 UTC m=+147.690874668"
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.318918 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.320835 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.820795373 +0000 UTC m=+148.197144215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.332612 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5fhjt" podStartSLOduration=8.332592335 podStartE2EDuration="8.332592335s" podCreationTimestamp="2026-02-16 12:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:20.315135775 +0000 UTC m=+147.691484496" watchObservedRunningTime="2026-02-16 12:55:20.332592335 +0000 UTC m=+147.708941056"
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.350188 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" podStartSLOduration=126.350166368 podStartE2EDuration="2m6.350166368s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:20.335029031 +0000 UTC m=+147.711377752" watchObservedRunningTime="2026-02-16 12:55:20.350166368 +0000 UTC m=+147.726515089"
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.350461 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-272mp" podStartSLOduration=127.350457537 podStartE2EDuration="2m7.350457537s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:20.346647147 +0000 UTC m=+147.722995868" watchObservedRunningTime="2026-02-16 12:55:20.350457537 +0000 UTC m=+147.726806258"
Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.422996 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.423231 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.923197836 +0000 UTC m=+148.299546567 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.423339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.423672 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:20.923658291 +0000 UTC m=+148.300007022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.523989 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.524197 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.024156883 +0000 UTC m=+148.400505604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.524592 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.524907 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.024894406 +0000 UTC m=+148.401243127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.557530 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.557623 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.625463 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.625616 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.125587984 +0000 UTC m=+148.501936715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.625710 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.626095 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.12608325 +0000 UTC m=+148.502431991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.726169 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.726503 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.226487639 +0000 UTC m=+148.602836360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.827617 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.828248 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.32819939 +0000 UTC m=+148.704548151 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:20 crc kubenswrapper[4740]: I0216 12:55:20.928901 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:20 crc kubenswrapper[4740]: E0216 12:55:20.929208 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.429194148 +0000 UTC m=+148.805542869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.029718 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.030165 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.530150905 +0000 UTC m=+148.906499646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.131097 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.131293 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.631271467 +0000 UTC m=+149.007620188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.131391 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.131731 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.631718582 +0000 UTC m=+149.008067313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.232741 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.233011 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.732981017 +0000 UTC m=+149.109329748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.297547 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" event={"ID":"456feb2b-91a3-42ae-aa03-accd55804c79","Type":"ContainerStarted","Data":"d631035ed32d16c29ba6b60f63d2571696f4507a64bd2022685e105cba31324e"} Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.299175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" event={"ID":"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b","Type":"ContainerStarted","Data":"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf"} Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.299642 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.301624 4740 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n92bx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.301693 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" podUID="bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.320383 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nq5df" podStartSLOduration=128.320365177 podStartE2EDuration="2m8.320365177s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:21.320232843 +0000 UTC m=+148.696581584" watchObservedRunningTime="2026-02-16 12:55:21.320365177 +0000 UTC m=+148.696713898" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.334522 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.334607 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.334673 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.335167 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.835151523 +0000 UTC m=+149.211500244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.340447 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.340517 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.436360 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.436549 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.936512523 +0000 UTC m=+149.312861244 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.436597 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.436649 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 
12:55:21.436719 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.437050 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:21.937037859 +0000 UTC m=+149.313386580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.438678 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.442495 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.537185 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.537386 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.037360256 +0000 UTC m=+149.413708977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.537449 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.537738 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.037726778 +0000 UTC m=+149.414075499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.557918 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.557983 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.601221 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.609650 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.618234 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.638668 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.638976 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.1388968 +0000 UTC m=+149.515245581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.639226 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.639616 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.139598413 +0000 UTC m=+149.515947134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.741214 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.741578 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.241544051 +0000 UTC m=+149.617892772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.742046 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.742441 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.242428189 +0000 UTC m=+149.618776910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.842679 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.842956 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.342916531 +0000 UTC m=+149.719265252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.843039 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.843310 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.343298082 +0000 UTC m=+149.719646803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: W0216 12:55:21.892666 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-cc56be54d9521ce488d80c743d09d9503a8888a571428980849610fe586f376a WatchSource:0}: Error finding container cc56be54d9521ce488d80c743d09d9503a8888a571428980849610fe586f376a: Status 404 returned error can't find the container with id cc56be54d9521ce488d80c743d09d9503a8888a571428980849610fe586f376a Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.944151 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.944373 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.444319291 +0000 UTC m=+149.820668012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:21 crc kubenswrapper[4740]: I0216 12:55:21.944503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:21 crc kubenswrapper[4740]: E0216 12:55:21.944863 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.444846448 +0000 UTC m=+149.821195179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.044881 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.045280 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.545264008 +0000 UTC m=+149.921612719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.146012 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.146408 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.64639162 +0000 UTC m=+150.022740341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.247369 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.248900 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.747953826 +0000 UTC m=+150.124302547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.307107 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-28sp5" event={"ID":"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e","Type":"ContainerStarted","Data":"ba16b7ac64d657254c6e5487def1cf620c63ec9030f4923398d92fdf686609b2"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.308394 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3b394bd1c25334140785d24738c9ec1573944f38a755ba7184b1706ca94c7583"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.310503 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" event={"ID":"8f4e839d-cd94-49e9-a386-e90820fceb5c","Type":"ContainerStarted","Data":"f6a136832741c73fca43a66086c3e6e1bcc2dae9db6224dd52bba1ff367bb413"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.314801 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" event={"ID":"1a760deb-c84d-4da0-a20b-dac7b17c24c7","Type":"ContainerStarted","Data":"e6cbeb6553c8b7d149e2a67552985a59c4e9dfcf44d3f750ced02d45877f1995"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.325410 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" 
event={"ID":"ad6fcf0b-0176-4920-93de-563a8f4af054","Type":"ContainerStarted","Data":"368b2d548f2abb6b56e7c310da90fc0e58dbfe8f69c244cfd0f7890620d5e9de"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.330437 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" event={"ID":"382ac2b0-b15a-412a-b8fb-e61844137cb1","Type":"ContainerStarted","Data":"b03798c0c5efa6a3a7b1ecb3eeba2cb565c15f381a550123fb8ac114e2645b85"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.332247 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cc56be54d9521ce488d80c743d09d9503a8888a571428980849610fe586f376a"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.338626 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"93c8024145c4e34e2dad8468d0651498513108aefbb9f3c19ebff516278ee557"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.340305 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7bdwm" podStartSLOduration=129.340289032 podStartE2EDuration="2m9.340289032s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.329970017 +0000 UTC m=+149.706318758" watchObservedRunningTime="2026-02-16 12:55:22.340289032 +0000 UTC m=+149.716637753" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.344263 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" 
event={"ID":"ad3a4715-2249-418d-b03e-bd5aac43089e","Type":"ContainerStarted","Data":"598c6514a53180b8bda5b700535a63d070b723b5e95339dbb596c4cf3e4048f0"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.349139 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.350366 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7t4lr" podStartSLOduration=129.350347118 podStartE2EDuration="2m9.350347118s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.347347924 +0000 UTC m=+149.723696645" watchObservedRunningTime="2026-02-16 12:55:22.350347118 +0000 UTC m=+149.726695839" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.350869 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.850851864 +0000 UTC m=+150.227200635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.357423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" event={"ID":"a31d7595-0ee7-48b5-9f1f-19907ed7c92b","Type":"ContainerStarted","Data":"1364cab4bea3b40917f030c07855f94fd0c250ea5ac801484a545e6e2e82dd41"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.358191 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.360150 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" event={"ID":"b2f50997-a877-4d3f-9cf7-df6d254b48f5","Type":"ContainerStarted","Data":"90443c3caf5da9529e8f128ee1bc74a1d9926fa339f2d7cada2a5ce3a2212d7d"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.362887 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" event={"ID":"28956c81-f1c4-471c-9564-5747a0a0aaf8","Type":"ContainerStarted","Data":"1c165460650228d07ce223c82716ef0f846094a0efda8dd9a566d265e70b51fb"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.388964 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" 
event={"ID":"fb14491a-6043-446a-8b10-626838253345","Type":"ContainerStarted","Data":"32c7495fbb3a8bc10cf5476cff3cb9407e5d2225803d07701acc111c5fea3dfd"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.390096 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9h8v8" podStartSLOduration=129.390076318 podStartE2EDuration="2m9.390076318s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.373633331 +0000 UTC m=+149.749982052" watchObservedRunningTime="2026-02-16 12:55:22.390076318 +0000 UTC m=+149.766425039" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.392517 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hkbdj" podStartSLOduration=129.392499394 podStartE2EDuration="2m9.392499394s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.388899882 +0000 UTC m=+149.765248613" watchObservedRunningTime="2026-02-16 12:55:22.392499394 +0000 UTC m=+149.768848115" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.411023 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" event={"ID":"92938f98-5bd3-49e2-be2d-65b0fd5d0c12","Type":"ContainerStarted","Data":"a0b23a96a96a2bcc1b147595563e360326c0e4208e6606154b0e6327c9a84579"} Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.411307 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc" podStartSLOduration=129.411294347 podStartE2EDuration="2m9.411294347s" 
podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.411160572 +0000 UTC m=+149.787509303" watchObservedRunningTime="2026-02-16 12:55:22.411294347 +0000 UTC m=+149.787643068" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.411509 4740 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n92bx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.411552 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" podUID="bd1e1fc5-4b5d-4d86-b543-e5e46c2b4c05" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.437962 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5tlhr" podStartSLOduration=129.437942965 podStartE2EDuration="2m9.437942965s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.43586635 +0000 UTC m=+149.812215071" watchObservedRunningTime="2026-02-16 12:55:22.437942965 +0000 UTC m=+149.814291696" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.450234 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.450542 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.950514411 +0000 UTC m=+150.326863142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.450666 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.452286 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:22.952276236 +0000 UTC m=+150.328624957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.461544 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-q8lfc" podStartSLOduration=129.461528837 podStartE2EDuration="2m9.461528837s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.459795352 +0000 UTC m=+149.836144063" watchObservedRunningTime="2026-02-16 12:55:22.461528837 +0000 UTC m=+149.837877558" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.484551 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8dt95" podStartSLOduration=129.4845316 podStartE2EDuration="2m9.4845316s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.482544188 +0000 UTC m=+149.858892909" watchObservedRunningTime="2026-02-16 12:55:22.4845316 +0000 UTC m=+149.860880321" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.551429 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.551731 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.051717734 +0000 UTC m=+150.428066455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.560170 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.560236 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.591708 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2wzdf" podStartSLOduration=128.591689763 podStartE2EDuration="2m8.591689763s" podCreationTimestamp="2026-02-16 12:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.591052592 +0000 UTC m=+149.967401323" watchObservedRunningTime="2026-02-16 12:55:22.591689763 +0000 UTC m=+149.968038494" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.594477 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" podStartSLOduration=129.59445964 podStartE2EDuration="2m9.59445964s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.55092382 +0000 UTC m=+149.927272541" watchObservedRunningTime="2026-02-16 12:55:22.59445964 +0000 UTC m=+149.970808371" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.646012 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzjq5" podStartSLOduration=129.645986271 podStartE2EDuration="2m9.645986271s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.639192187 +0000 UTC m=+150.015540918" watchObservedRunningTime="2026-02-16 12:55:22.645986271 +0000 UTC m=+150.022334992" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.674029 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.674445 4740 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.174431617 +0000 UTC m=+150.550780338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.692339 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" podStartSLOduration=129.692314269 podStartE2EDuration="2m9.692314269s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.674782477 +0000 UTC m=+150.051131198" watchObservedRunningTime="2026-02-16 12:55:22.692314269 +0000 UTC m=+150.068662990" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.707063 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" podStartSLOduration=129.707043603 podStartE2EDuration="2m9.707043603s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:22.70473585 +0000 UTC m=+150.081084581" watchObservedRunningTime="2026-02-16 12:55:22.707043603 +0000 UTC m=+150.083392324" Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 
12:55:22.776318 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.776409 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.276379654 +0000 UTC m=+150.652728375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.776844 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.777367 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:23.277350615 +0000 UTC m=+150.653699336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.878047 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.878186 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.378159507 +0000 UTC m=+150.754508228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.878319 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.878623 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.378611802 +0000 UTC m=+150.754960523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:22 crc kubenswrapper[4740]: I0216 12:55:22.979551 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:22 crc kubenswrapper[4740]: E0216 12:55:22.980014 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.479995341 +0000 UTC m=+150.856344062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.080902 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.081376 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.581357001 +0000 UTC m=+150.957705722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.182439 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.182626 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.682595697 +0000 UTC m=+151.058944418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.182752 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.183094 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.683084792 +0000 UTC m=+151.059433513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.283623 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.283799 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.783776771 +0000 UTC m=+151.160125492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.284211 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.284532 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.784521875 +0000 UTC m=+151.160870596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.314491 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.384794 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.384933 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.884904043 +0000 UTC m=+151.261252764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.385180 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.385610 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.885599745 +0000 UTC m=+151.261948516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.388410 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-m9529 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.388468 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m9529" podUID="91631c8c-d18f-44d6-9919-0b5fe8e8d45b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.389025 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-m9529 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.389088 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-m9529" podUID="91631c8c-d18f-44d6-9919-0b5fe8e8d45b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.460718 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-h95tf" event={"ID":"e89bf85c-b3ee-4ac6-a904-346cba8dbb49","Type":"ContainerStarted","Data":"ab14e6f8c8ce35738b50a2944ef3524e3a9a5c7fca335b701a0d336d0a066778"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.463120 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-28sp5" event={"ID":"d749a00f-d84b-49ab-b7fb-4a6bc44d2c7e","Type":"ContainerStarted","Data":"8afc2c22702047e8d86b6d846960e6f89991e6b27189b2eb9d166331a9c09da9"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.463860 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.475060 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2ec09428dfb3b7935771595455f5922604b9bbf1999e6e4251c26536adbabf35"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.486418 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.487577 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:23.987550393 +0000 UTC m=+151.363899114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.488246 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9ccdf7ef0ceff951fcead1563ceb99ab36bb27ca0443dd99de7c03b02e42f980"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.493608 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0893981b224d46d2d6220c3481bba88b7b55849a1f366208fe26b48b38a15a3f"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.494310 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.501177 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-28sp5" podStartSLOduration=11.501159031 podStartE2EDuration="11.501159031s" podCreationTimestamp="2026-02-16 12:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:23.497519027 +0000 UTC m=+150.873867758" watchObservedRunningTime="2026-02-16 12:55:23.501159031 +0000 UTC m=+150.877507752" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.504934 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" event={"ID":"fb14491a-6043-446a-8b10-626838253345","Type":"ContainerStarted","Data":"03a30ba0344dc0500c6ab61eadacc3ebfef5c5d5f676b6d4f1bbf47773b0cc10"} Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.506951 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.567132 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:23 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:23 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:23 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.567202 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.588518 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.591000 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:24.090982788 +0000 UTC m=+151.467331569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.690842 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.691098 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.191075057 +0000 UTC m=+151.567423778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.721317 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" podStartSLOduration=130.721288408 podStartE2EDuration="2m10.721288408s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:23.721095142 +0000 UTC m=+151.097443863" watchObservedRunningTime="2026-02-16 12:55:23.721288408 +0000 UTC m=+151.097637139" Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.793935 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.794685 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.294669787 +0000 UTC m=+151.671018508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.895415 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.895565 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.395542451 +0000 UTC m=+151.771891182 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.895639 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.895945 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.395936164 +0000 UTC m=+151.772284885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.996880 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.997005 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.496981523 +0000 UTC m=+151.873330244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:23 crc kubenswrapper[4740]: I0216 12:55:23.997261 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:23 crc kubenswrapper[4740]: E0216 12:55:23.997594 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.497583722 +0000 UTC m=+151.873932443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.098835 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.099184 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.599165939 +0000 UTC m=+151.975514660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.200033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.200534 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.700495767 +0000 UTC m=+152.076844498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.301652 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.301871 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.801840066 +0000 UTC m=+152.178188797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.302017 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.302401 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.802380763 +0000 UTC m=+152.178729484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.403164 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.403433 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.903407533 +0000 UTC m=+152.279756254 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.403792 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.404157 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:24.904142716 +0000 UTC m=+152.280491437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.427109 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-22crz"] Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.427986 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.432203 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.446548 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-22crz"] Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.504768 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.504933 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 
crc kubenswrapper[4740]: I0216 12:55:24.504967 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.505029 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrxb\" (UniqueName: \"kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.505125 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.005110493 +0000 UTC m=+152.381459214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.523269 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" event={"ID":"e89bf85c-b3ee-4ac6-a904-346cba8dbb49","Type":"ContainerStarted","Data":"0e332639c4d13b2001f46e6085d2af79175e59c327a5d639b854fce6d41db658"} Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.526929 4740 generic.go:334] "Generic (PLEG): container finished" podID="9062ffdd-baa5-4ebc-8f40-353fac0e821e" containerID="907512b6409722d30c16b894b8c9741e98ad5cd769eb1a4db429190f1ce78cae" exitCode=0 Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.527320 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" event={"ID":"9062ffdd-baa5-4ebc-8f40-353fac0e821e","Type":"ContainerDied","Data":"907512b6409722d30c16b894b8c9741e98ad5cd769eb1a4db429190f1ce78cae"} Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.574346 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:24 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:24 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:24 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.574400 4740 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.596880 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-smtc5"] Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.598024 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.601888 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.607696 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.607877 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.607905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrxb\" (UniqueName: \"kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " 
pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.607986 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.608857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.609212 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.109195278 +0000 UTC m=+152.485544009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.610408 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.625350 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smtc5"] Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.662583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrxb\" (UniqueName: \"kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb\") pod \"certified-operators-22crz\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") " pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.712527 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.712783 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.212761698 +0000 UTC m=+152.589110429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.712921 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.712963 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk44j\" (UniqueName: \"kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.712985 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:55:24 crc 
kubenswrapper[4740]: I0216 12:55:24.713047 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.713342 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.213331885 +0000 UTC m=+152.589680606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.788248 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.797085 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"] Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.798034 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.801573 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.812658 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"] Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.813010 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.813049 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.814722 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.814918 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.814954 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk44j\" (UniqueName: \"kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " 
pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.814972 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.815431 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.815498 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.31548221 +0000 UTC m=+152.691830931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.815693 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.820083 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.879939 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk44j\" (UniqueName: \"kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j\") pod \"community-operators-smtc5\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") " pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.891233 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.891286 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.902038 4740 patch_prober.go:28] interesting pod/console-f9d7485db-gctsd container/console 
namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.902097 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gctsd" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.917070 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.918239 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.918328 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb4r9\" (UniqueName: \"kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.918372 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " 
pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:55:24 crc kubenswrapper[4740]: I0216 12:55:24.918398 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:55:24 crc kubenswrapper[4740]: E0216 12:55:24.920418 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.420406801 +0000 UTC m=+152.796755522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.004279 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z48zk"] Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.017741 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z48zk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.025462 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.025665 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb4r9\" (UniqueName: \"kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.025689 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.025709 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.027346 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 12:55:25.527330396 +0000 UTC m=+152.903679117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.028971 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.032042 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.035027 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z48zk"] Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.049469 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cgnk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.058615 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bl8xk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.079514 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb4r9\" (UniqueName: \"kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9\") pod \"certified-operators-hzpc4\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.133836 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.134645 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.135482 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frh8h\" (UniqueName: \"kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.135581 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.135631 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " 
pod="openshift-marketplace/community-operators-z48zk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.135706 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.136744 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.136971 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.137281 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.637270445 +0000 UTC m=+153.013619166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.148804 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.165208 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236520 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236773 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236835 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236884 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frh8h\" (UniqueName: \"kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236927 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content\") pod \"community-operators-z48zk\" (UID: 
\"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.236964 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.237055 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.737039995 +0000 UTC m=+153.113388716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.237786 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.238236 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content\") pod 
\"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.276641 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frh8h\" (UniqueName: \"kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h\") pod \"community-operators-z48zk\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") " pod="openshift-marketplace/community-operators-z48zk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.330269 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.337992 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.338051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.338107 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:25 crc 
kubenswrapper[4740]: E0216 12:55:25.338393 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.838382124 +0000 UTC m=+153.214730845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.338574 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.340111 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.354283 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n92bx" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.397883 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 
12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.399100 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z48zk" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.439549 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.439696 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.939674251 +0000 UTC m=+153.316022972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.440088 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.466061 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:25.966045991 +0000 UTC m=+153.342394712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.476763 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.494153 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-22crz"] Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.542474 4740 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.542689 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" event={"ID":"e89bf85c-b3ee-4ac6-a904-346cba8dbb49","Type":"ContainerStarted","Data":"c13655b7aa4b36fa8580b7aa0221de76bb11751e6e0640802f813e9192709d55"} Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.547338 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.558963 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 12:55:26.058936784 +0000 UTC m=+153.435285515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.559051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:25 crc kubenswrapper[4740]: E0216 12:55:25.559408 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 12:55:26.059398949 +0000 UTC m=+153.435747670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lkjkp" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.559582 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.564008 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:25 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:25 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:25 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.564050 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.573013 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smtc5"] Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.594914 4740 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T12:55:25.542721424Z","Handler":null,"Name":""} Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 
12:55:25.599506 4740 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.599568 4740 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.669439 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.700858 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.770914 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.773764 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.774154 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.827126 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lkjkp\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.865683 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"] Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.890355 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.972401 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm2wm\" (UniqueName: \"kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm\") pod \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.972483 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume\") pod \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.972517 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume\") pod \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\" (UID: \"9062ffdd-baa5-4ebc-8f40-353fac0e821e\") " Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.974029 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume" (OuterVolumeSpecName: "config-volume") pod "9062ffdd-baa5-4ebc-8f40-353fac0e821e" (UID: "9062ffdd-baa5-4ebc-8f40-353fac0e821e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.989518 4740 patch_prober.go:28] interesting pod/apiserver-76f77b778f-nqbws container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]log ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]etcd ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 16 12:55:25 crc kubenswrapper[4740]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 16 12:55:25 crc kubenswrapper[4740]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 16 12:55:25 crc kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 12:55:25 crc kubenswrapper[4740]: livez check failed Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.989588 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" podUID="fb14491a-6043-446a-8b10-626838253345" containerName="openshift-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.991360 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm" (OuterVolumeSpecName: "kube-api-access-pm2wm") pod "9062ffdd-baa5-4ebc-8f40-353fac0e821e" (UID: "9062ffdd-baa5-4ebc-8f40-353fac0e821e"). InnerVolumeSpecName "kube-api-access-pm2wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.991936 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9062ffdd-baa5-4ebc-8f40-353fac0e821e" (UID: "9062ffdd-baa5-4ebc-8f40-353fac0e821e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:55:25 crc kubenswrapper[4740]: I0216 12:55:25.995401 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:25.999622 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.045500 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z48zk"] Feb 16 12:55:26 crc kubenswrapper[4740]: W0216 12:55:26.064053 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod17e62feb_2b96_41e9_9060_492217efc502.slice/crio-88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac WatchSource:0}: Error finding container 88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac: Status 404 returned error can't find the container with id 88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.075085 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm2wm\" (UniqueName: \"kubernetes.io/projected/9062ffdd-baa5-4ebc-8f40-353fac0e821e-kube-api-access-pm2wm\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.075149 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9062ffdd-baa5-4ebc-8f40-353fac0e821e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.075161 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9062ffdd-baa5-4ebc-8f40-353fac0e821e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.214976 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"] Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 
12:55:26.389447 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"]
Feb 16 12:55:26 crc kubenswrapper[4740]: E0216 12:55:26.389986 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9062ffdd-baa5-4ebc-8f40-353fac0e821e" containerName="collect-profiles"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.389998 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9062ffdd-baa5-4ebc-8f40-353fac0e821e" containerName="collect-profiles"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.390126 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9062ffdd-baa5-4ebc-8f40-353fac0e821e" containerName="collect-profiles"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.390852 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.392547 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.398898 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"]
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.480098 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.480151 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.480496 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9s2l\" (UniqueName: \"kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.554033 4740 generic.go:334] "Generic (PLEG): container finished" podID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerID="1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88" exitCode=0
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.554131 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerDied","Data":"1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.554192 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerStarted","Data":"a9c9a2711d899c1e5a260796e95114ebf5c80382d12c8808c0846487a96c8aa1"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.556074 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.556106 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerDied","Data":"681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.556061 4740 generic.go:334] "Generic (PLEG): container finished" podID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerID="681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf" exitCode=0
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.556149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerStarted","Data":"e2704b65ce01fba3c60e03244a825b4b8122c50b215c9372a0b6818fde2a82aa"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.557619 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" event={"ID":"56fbd3c7-a514-479c-9b0f-1cdb3025cae6","Type":"ContainerStarted","Data":"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.557651 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" event={"ID":"56fbd3c7-a514-479c-9b0f-1cdb3025cae6","Type":"ContainerStarted","Data":"1104556d5cde5c0aa4a407502225880f615d1c9eedcf19e3ada6ce6e63d3b266"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.557773 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.560622 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 12:55:26 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld
Feb 16 12:55:26 crc kubenswrapper[4740]: [+]process-running ok
Feb 16 12:55:26 crc kubenswrapper[4740]: healthz check failed
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.560666 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.560733 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" event={"ID":"e89bf85c-b3ee-4ac6-a904-346cba8dbb49","Type":"ContainerStarted","Data":"5351383c72d73b024fe229e7406f0eb4fe7f7cafdebc643e104e79f1b492532a"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.562204 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v" event={"ID":"9062ffdd-baa5-4ebc-8f40-353fac0e821e","Type":"ContainerDied","Data":"065139da6c6a631eff569a054a8bb03bcd168cb04767b6d3a09ccc0de2e57e23"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.562221 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.562243 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="065139da6c6a631eff569a054a8bb03bcd168cb04767b6d3a09ccc0de2e57e23"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.563654 4740 generic.go:334] "Generic (PLEG): container finished" podID="44198116-006f-4be3-ad53-3d32576dd681" containerID="2d50e15e7dfab2ba0d8e36c47eedb3a59a16e3076615834b16679a8be2cde520" exitCode=0
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.563716 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerDied","Data":"2d50e15e7dfab2ba0d8e36c47eedb3a59a16e3076615834b16679a8be2cde520"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.563749 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerStarted","Data":"e130a9aace627f73e9efde47dbcd50406ac735047566ac4275095c2434589e89"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.565454 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"17e62feb-2b96-41e9-9060-492217efc502","Type":"ContainerStarted","Data":"e6477eced407d1d3ed07a0244a19704e287cf3d8859fe73a5389dfb5507f363f"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.565497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"17e62feb-2b96-41e9-9060-492217efc502","Type":"ContainerStarted","Data":"88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.569499 4740 generic.go:334] "Generic (PLEG): container finished" podID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerID="c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106" exitCode=0
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.569531 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerDied","Data":"c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.569565 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerStarted","Data":"30342fb7e4ac42c29ddfbf6e245edd8370e6082c882f3ddc8fc68fa25e67ec8b"}
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.582035 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9s2l\" (UniqueName: \"kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.582224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.582320 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.582726 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.582788 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.611042 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9s2l\" (UniqueName: \"kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l\") pod \"redhat-marketplace-tbqv5\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") " pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.680522 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-h95tf" podStartSLOduration=14.680503037 podStartE2EDuration="14.680503037s" podCreationTimestamp="2026-02-16 12:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:26.675935813 +0000 UTC m=+154.052284544" watchObservedRunningTime="2026-02-16 12:55:26.680503037 +0000 UTC m=+154.056851758"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.713536 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.713518266 podStartE2EDuration="1.713518266s" podCreationTimestamp="2026-02-16 12:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:26.710455499 +0000 UTC m=+154.086804220" watchObservedRunningTime="2026-02-16 12:55:26.713518266 +0000 UTC m=+154.089866987"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.739695 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" podStartSLOduration=133.739674108 podStartE2EDuration="2m13.739674108s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:55:26.738367087 +0000 UTC m=+154.114715808" watchObservedRunningTime="2026-02-16 12:55:26.739674108 +0000 UTC m=+154.116022829"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.775757 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.818198 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"]
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.824061 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcgnl"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.837301 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"]
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.886034 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.886130 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.886164 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnrg\" (UniqueName: \"kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.903156 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5mhmc"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.987702 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.987799 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.987884 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjnrg\" (UniqueName: \"kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.988581 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl"
Feb 16 12:55:26 crc kubenswrapper[4740]: I0216 12:55:26.989087 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.011693 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjnrg\" (UniqueName: \"kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg\") pod \"redhat-marketplace-gcgnl\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " pod="openshift-marketplace/redhat-marketplace-gcgnl"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.145827 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcgnl"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.277510 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"]
Feb 16 12:55:27 crc kubenswrapper[4740]: W0216 12:55:27.288601 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fd80862_652c_4fa2_a591_44a3cc76379d.slice/crio-f4f0e4c138876a419d8305e7e6a9bc95934cbb191f330681343c30e1610937eb WatchSource:0}: Error finding container f4f0e4c138876a419d8305e7e6a9bc95934cbb191f330681343c30e1610937eb: Status 404 returned error can't find the container with id f4f0e4c138876a419d8305e7e6a9bc95934cbb191f330681343c30e1610937eb
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.294432 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.337026 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"]
Feb 16 12:55:27 crc kubenswrapper[4740]: W0216 12:55:27.358475 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf80b641a_1e2b_4db3_9298_08042171a404.slice/crio-0b376e259a09836cfaec84762b20e19cdb8174f29dd03942db440dec05590e0e WatchSource:0}: Error finding container 0b376e259a09836cfaec84762b20e19cdb8174f29dd03942db440dec05590e0e: Status 404 returned error can't find the container with id 0b376e259a09836cfaec84762b20e19cdb8174f29dd03942db440dec05590e0e
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.561068 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 12:55:27 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld
Feb 16 12:55:27 crc kubenswrapper[4740]: [+]process-running ok
Feb 16 12:55:27 crc kubenswrapper[4740]: healthz check failed
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.561442 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.578669 4740 generic.go:334] "Generic (PLEG): container finished" podID="f80b641a-1e2b-4db3-9298-08042171a404" containerID="a95746ac673c65b743bb2c6ae6349ae88b4476464a895083995fbb1946f5d59e" exitCode=0
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.578930 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerDied","Data":"a95746ac673c65b743bb2c6ae6349ae88b4476464a895083995fbb1946f5d59e"}
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.579007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerStarted","Data":"0b376e259a09836cfaec84762b20e19cdb8174f29dd03942db440dec05590e0e"}
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.584527 4740 generic.go:334] "Generic (PLEG): container finished" podID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerID="fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8" exitCode=0
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.584995 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerDied","Data":"fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8"}
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.585023 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerStarted","Data":"f4f0e4c138876a419d8305e7e6a9bc95934cbb191f330681343c30e1610937eb"}
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.592460 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"]
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.594732 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.596946 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.601669 4740 generic.go:334] "Generic (PLEG): container finished" podID="17e62feb-2b96-41e9-9060-492217efc502" containerID="e6477eced407d1d3ed07a0244a19704e287cf3d8859fe73a5389dfb5507f363f" exitCode=0
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.601776 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"17e62feb-2b96-41e9-9060-492217efc502","Type":"ContainerDied","Data":"e6477eced407d1d3ed07a0244a19704e287cf3d8859fe73a5389dfb5507f363f"}
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.608295 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"]
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.696137 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.696217 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.696313 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89vb4\" (UniqueName: \"kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.797676 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89vb4\" (UniqueName: \"kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.797751 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.797805 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.798323 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.798910 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.818507 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89vb4\" (UniqueName: \"kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4\") pod \"redhat-operators-lrlzg\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") " pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.924729 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.991682 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"]
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.994149 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:55:27 crc kubenswrapper[4740]: I0216 12:55:27.997467 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"]
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.101929 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.101997 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m9jh\" (UniqueName: \"kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.102225 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.203934 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m9jh\" (UniqueName: \"kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.204325 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.204368 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.205112 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.205147 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.232611 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m9jh\" (UniqueName: \"kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh\") pod \"redhat-operators-wwk79\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.334782 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.446692 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"]
Feb 16 12:55:28 crc kubenswrapper[4740]: W0216 12:55:28.484340 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb4cf07f_4486_4ff8_88d3_b04296a09ece.slice/crio-bd639f85fe34dad1670a6a91b1a5a271314aa8af3eb87c2254c3dba3da066707 WatchSource:0}: Error finding container bd639f85fe34dad1670a6a91b1a5a271314aa8af3eb87c2254c3dba3da066707: Status 404 returned error can't find the container with id bd639f85fe34dad1670a6a91b1a5a271314aa8af3eb87c2254c3dba3da066707
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.560579 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 12:55:28 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld
Feb 16 12:55:28 crc kubenswrapper[4740]: [+]process-running ok
Feb 16 12:55:28 crc kubenswrapper[4740]: healthz check failed
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.560627 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.596945 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"]
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.622100 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerStarted","Data":"bd639f85fe34dad1670a6a91b1a5a271314aa8af3eb87c2254c3dba3da066707"}
Feb 16 12:55:28 crc kubenswrapper[4740]: I0216 12:55:28.951856 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.036508 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir\") pod \"17e62feb-2b96-41e9-9060-492217efc502\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") "
Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.036633 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "17e62feb-2b96-41e9-9060-492217efc502" (UID: "17e62feb-2b96-41e9-9060-492217efc502"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.036637 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access\") pod \"17e62feb-2b96-41e9-9060-492217efc502\" (UID: \"17e62feb-2b96-41e9-9060-492217efc502\") "
Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.037006 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17e62feb-2b96-41e9-9060-492217efc502-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.042227 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "17e62feb-2b96-41e9-9060-492217efc502" (UID: "17e62feb-2b96-41e9-9060-492217efc502"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.137838 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17e62feb-2b96-41e9-9060-492217efc502-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.559086 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:29 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:29 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:29 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.559158 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.636804 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"17e62feb-2b96-41e9-9060-492217efc502","Type":"ContainerDied","Data":"88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac"} Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.636866 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f59ad790a5442a4051e17d6777b2fce1115022048d6ecc31423c1ed76e4bac" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.636914 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.642795 4740 generic.go:334] "Generic (PLEG): container finished" podID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerID="9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d" exitCode=0 Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.642867 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerDied","Data":"9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d"} Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.653201 4740 generic.go:334] "Generic (PLEG): container finished" podID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerID="2a749969623029aaccbdf27a9810d459d9c5039d65880ed9d91f3a4574878a8a" exitCode=0 Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.653234 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerDied","Data":"2a749969623029aaccbdf27a9810d459d9c5039d65880ed9d91f3a4574878a8a"} Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.653259 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerStarted","Data":"ce33be103aa47d28e69db79295eb5459d1dc46ee55c5e4d98d8d9854797067ed"} Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.808513 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:29 crc kubenswrapper[4740]: I0216 12:55:29.814026 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-nqbws" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.558221 
4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:30 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:30 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:30 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.558289 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.854912 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 12:55:30 crc kubenswrapper[4740]: E0216 12:55:30.855159 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e62feb-2b96-41e9-9060-492217efc502" containerName="pruner" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.855174 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e62feb-2b96-41e9-9060-492217efc502" containerName="pruner" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.855298 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e62feb-2b96-41e9-9060-492217efc502" containerName="pruner" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.868145 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.868237 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.873206 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.873433 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.969632 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:30 crc kubenswrapper[4740]: I0216 12:55:30.970082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.071222 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.071297 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.071730 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.110191 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.184025 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.567520 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:31 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Feb 16 12:55:31 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:31 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.567581 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:31 crc kubenswrapper[4740]: I0216 12:55:31.705507 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 12:55:32 crc kubenswrapper[4740]: I0216 12:55:32.558879 4740 patch_prober.go:28] interesting pod/router-default-5444994796-42rhd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 12:55:32 crc kubenswrapper[4740]: [+]has-synced ok Feb 16 12:55:32 crc kubenswrapper[4740]: [+]process-running ok Feb 16 12:55:32 crc kubenswrapper[4740]: healthz check failed Feb 16 12:55:32 crc kubenswrapper[4740]: I0216 12:55:32.558944 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-42rhd" podUID="91338fe1-147f-41ff-9816-8cdcb7d1a08b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:55:33 crc kubenswrapper[4740]: I0216 12:55:33.393364 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-m9529" Feb 16 12:55:33 crc kubenswrapper[4740]: I0216 12:55:33.450610 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-28sp5" Feb 16 12:55:33 crc kubenswrapper[4740]: I0216 12:55:33.578995 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:33 crc kubenswrapper[4740]: I0216 12:55:33.592339 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-42rhd" Feb 16 12:55:34 crc kubenswrapper[4740]: I0216 12:55:34.891595 4740 patch_prober.go:28] interesting pod/console-f9d7485db-gctsd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 16 12:55:34 crc kubenswrapper[4740]: I0216 
12:55:34.891924 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gctsd" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 16 12:55:35 crc kubenswrapper[4740]: I0216 12:55:35.771920 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:35 crc kubenswrapper[4740]: I0216 12:55:35.784917 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12044a18-c0cd-4ce6-a1f8-45e3c10095fb-metrics-certs\") pod \"network-metrics-daemon-tcfzx\" (UID: \"12044a18-c0cd-4ce6-a1f8-45e3c10095fb\") " pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:35 crc kubenswrapper[4740]: I0216 12:55:35.800417 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcfzx" Feb 16 12:55:44 crc kubenswrapper[4740]: I0216 12:55:44.880985 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95462a9-2f88-47a0-b230-2f824b38a575","Type":"ContainerStarted","Data":"249fdb9b89f9d4b810e4d5f438067b210d3881b760dc1d050f49a1fde20196f8"} Feb 16 12:55:44 crc kubenswrapper[4740]: I0216 12:55:44.895750 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:44 crc kubenswrapper[4740]: I0216 12:55:44.899682 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 12:55:45 crc kubenswrapper[4740]: I0216 12:55:45.575494 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:55:45 crc kubenswrapper[4740]: I0216 12:55:45.575857 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:55:46 crc kubenswrapper[4740]: I0216 12:55:46.001649 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:55:55 crc kubenswrapper[4740]: I0216 12:55:55.068115 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9j2xl" Feb 16 12:55:58 crc kubenswrapper[4740]: E0216 12:55:58.415571 
4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 12:55:58 crc kubenswrapper[4740]: E0216 12:55:58.416563 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7m9jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wwk79_openshift-marketplace(fa69bf39-1ed0-42ba-91f9-c401e7fb9337): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 12:55:58 crc kubenswrapper[4740]: E0216 12:55:58.417884 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wwk79" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.183569 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wwk79" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.289089 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.289439 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfrxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-22crz_openshift-marketplace(70e65531-7cfb-415d-a0a7-25288c2cd5c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.290658 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-22crz" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" Feb 16 12:56:00 crc 
kubenswrapper[4740]: E0216 12:56:00.306394 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.306527 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xb4r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-hzpc4_openshift-marketplace(44198116-006f-4be3-ad53-3d32576dd681): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.307632 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hzpc4" podUID="44198116-006f-4be3-ad53-3d32576dd681" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.358034 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.358199 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frh8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-z48zk_openshift-marketplace(e9545e2f-e72f-4944-bc7a-ed9b052a34b0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 12:56:00 crc kubenswrapper[4740]: E0216 12:56:00.359569 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-z48zk" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" Feb 16 12:56:00 crc 
kubenswrapper[4740]: I0216 12:56:00.613505 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tcfzx"] Feb 16 12:56:00 crc kubenswrapper[4740]: W0216 12:56:00.777974 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12044a18_c0cd_4ce6_a1f8_45e3c10095fb.slice/crio-bd9d88360005295a97932eeb33970cf20a8ffed4d32bf77e8df1bf26f84859e5 WatchSource:0}: Error finding container bd9d88360005295a97932eeb33970cf20a8ffed4d32bf77e8df1bf26f84859e5: Status 404 returned error can't find the container with id bd9d88360005295a97932eeb33970cf20a8ffed4d32bf77e8df1bf26f84859e5 Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.963261 4740 generic.go:334] "Generic (PLEG): container finished" podID="f80b641a-1e2b-4db3-9298-08042171a404" containerID="16e02cae8966336d5b6d2314925a616b32a6590e28fe0b840dfe710d1fb15fab" exitCode=0 Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.963302 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerDied","Data":"16e02cae8966336d5b6d2314925a616b32a6590e28fe0b840dfe710d1fb15fab"} Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.967053 4740 generic.go:334] "Generic (PLEG): container finished" podID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerID="03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc" exitCode=0 Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.967094 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerDied","Data":"03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc"} Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.975341 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerID="acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976" exitCode=0 Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.975417 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerDied","Data":"acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976"} Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.986232 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95462a9-2f88-47a0-b230-2f824b38a575","Type":"ContainerStarted","Data":"2a7f51befb95e02874f85a5b08ab65f653237e20090a1b1673557a9896ac9f2f"} Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.995274 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerStarted","Data":"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae"} Feb 16 12:56:00 crc kubenswrapper[4740]: I0216 12:56:00.998549 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" event={"ID":"12044a18-c0cd-4ce6-a1f8-45e3c10095fb","Type":"ContainerStarted","Data":"bd9d88360005295a97932eeb33970cf20a8ffed4d32bf77e8df1bf26f84859e5"} Feb 16 12:56:01 crc kubenswrapper[4740]: E0216 12:56:01.000495 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hzpc4" podUID="44198116-006f-4be3-ad53-3d32576dd681" Feb 16 12:56:01 crc kubenswrapper[4740]: E0216 12:56:01.001413 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-22crz" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" Feb 16 12:56:01 crc kubenswrapper[4740]: E0216 12:56:01.013515 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-z48zk" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" Feb 16 12:56:01 crc kubenswrapper[4740]: I0216 12:56:01.077546 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=31.077510058 podStartE2EDuration="31.077510058s" podCreationTimestamp="2026-02-16 12:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:01.077457296 +0000 UTC m=+188.453806027" watchObservedRunningTime="2026-02-16 12:56:01.077510058 +0000 UTC m=+188.453858789" Feb 16 12:56:01 crc kubenswrapper[4740]: I0216 12:56:01.606086 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.011542 4740 generic.go:334] "Generic (PLEG): container finished" podID="d95462a9-2f88-47a0-b230-2f824b38a575" containerID="2a7f51befb95e02874f85a5b08ab65f653237e20090a1b1673557a9896ac9f2f" exitCode=0 Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.011972 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95462a9-2f88-47a0-b230-2f824b38a575","Type":"ContainerDied","Data":"2a7f51befb95e02874f85a5b08ab65f653237e20090a1b1673557a9896ac9f2f"} Feb 16 12:56:02 crc kubenswrapper[4740]: 
I0216 12:56:02.015092 4740 generic.go:334] "Generic (PLEG): container finished" podID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerID="6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae" exitCode=0 Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.015151 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerDied","Data":"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae"} Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.023727 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" event={"ID":"12044a18-c0cd-4ce6-a1f8-45e3c10095fb","Type":"ContainerStarted","Data":"af8b0fbc0f7d859e0a6e078815876aa26fd134fecbc94e0361122422e921af21"} Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.023787 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tcfzx" event={"ID":"12044a18-c0cd-4ce6-a1f8-45e3c10095fb","Type":"ContainerStarted","Data":"a244c39bf1f06ab492def06fe6a192fa4c5a20c6bf0a7fadf0a33a59cf7132ce"} Feb 16 12:56:02 crc kubenswrapper[4740]: I0216 12:56:02.049142 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tcfzx" podStartSLOduration=169.049119461 podStartE2EDuration="2m49.049119461s" podCreationTimestamp="2026-02-16 12:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:02.04462443 +0000 UTC m=+189.420973151" watchObservedRunningTime="2026-02-16 12:56:02.049119461 +0000 UTC m=+189.425468183" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.031003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" 
event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerStarted","Data":"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2"} Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.033270 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerStarted","Data":"b41c83a6cada398da2a369d1028d362c634c4a1c17432a3088895c384269d048"} Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.035518 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerStarted","Data":"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b"} Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.038311 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerStarted","Data":"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483"} Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.052366 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lrlzg" podStartSLOduration=3.2838232339999998 podStartE2EDuration="36.052354001s" podCreationTimestamp="2026-02-16 12:55:27 +0000 UTC" firstStartedPulling="2026-02-16 12:55:29.644397613 +0000 UTC m=+157.020746334" lastFinishedPulling="2026-02-16 12:56:02.41292838 +0000 UTC m=+189.789277101" observedRunningTime="2026-02-16 12:56:03.049888964 +0000 UTC m=+190.426237685" watchObservedRunningTime="2026-02-16 12:56:03.052354001 +0000 UTC m=+190.428702722" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.068876 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-smtc5" podStartSLOduration=3.560533809 
podStartE2EDuration="39.06885653s" podCreationTimestamp="2026-02-16 12:55:24 +0000 UTC" firstStartedPulling="2026-02-16 12:55:26.570676421 +0000 UTC m=+153.947025142" lastFinishedPulling="2026-02-16 12:56:02.078999142 +0000 UTC m=+189.455347863" observedRunningTime="2026-02-16 12:56:03.067523858 +0000 UTC m=+190.443872589" watchObservedRunningTime="2026-02-16 12:56:03.06885653 +0000 UTC m=+190.445205251" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.091021 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gcgnl" podStartSLOduration=2.8033915780000003 podStartE2EDuration="37.091001197s" podCreationTimestamp="2026-02-16 12:55:26 +0000 UTC" firstStartedPulling="2026-02-16 12:55:27.580871549 +0000 UTC m=+154.957220270" lastFinishedPulling="2026-02-16 12:56:01.868481178 +0000 UTC m=+189.244829889" observedRunningTime="2026-02-16 12:56:03.088588561 +0000 UTC m=+190.464937282" watchObservedRunningTime="2026-02-16 12:56:03.091001197 +0000 UTC m=+190.467349918" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.113929 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tbqv5" podStartSLOduration=2.751740623 podStartE2EDuration="37.113910548s" podCreationTimestamp="2026-02-16 12:55:26 +0000 UTC" firstStartedPulling="2026-02-16 12:55:27.593010161 +0000 UTC m=+154.969358882" lastFinishedPulling="2026-02-16 12:56:01.955180096 +0000 UTC m=+189.331528807" observedRunningTime="2026-02-16 12:56:03.11365096 +0000 UTC m=+190.489999681" watchObservedRunningTime="2026-02-16 12:56:03.113910548 +0000 UTC m=+190.490259269" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.395343 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.424161 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir\") pod \"d95462a9-2f88-47a0-b230-2f824b38a575\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.424215 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d95462a9-2f88-47a0-b230-2f824b38a575" (UID: "d95462a9-2f88-47a0-b230-2f824b38a575"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.424366 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access\") pod \"d95462a9-2f88-47a0-b230-2f824b38a575\" (UID: \"d95462a9-2f88-47a0-b230-2f824b38a575\") " Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.424994 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95462a9-2f88-47a0-b230-2f824b38a575-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.446836 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d95462a9-2f88-47a0-b230-2f824b38a575" (UID: "d95462a9-2f88-47a0-b230-2f824b38a575"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.526741 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95462a9-2f88-47a0-b230-2f824b38a575-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:03 crc kubenswrapper[4740]: I0216 12:56:03.686277 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"] Feb 16 12:56:04 crc kubenswrapper[4740]: I0216 12:56:04.045400 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 12:56:04 crc kubenswrapper[4740]: I0216 12:56:04.045322 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95462a9-2f88-47a0-b230-2f824b38a575","Type":"ContainerDied","Data":"249fdb9b89f9d4b810e4d5f438067b210d3881b760dc1d050f49a1fde20196f8"} Feb 16 12:56:04 crc kubenswrapper[4740]: I0216 12:56:04.045493 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="249fdb9b89f9d4b810e4d5f438067b210d3881b760dc1d050f49a1fde20196f8" Feb 16 12:56:04 crc kubenswrapper[4740]: I0216 12:56:04.918757 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:56:04 crc kubenswrapper[4740]: I0216 12:56:04.919120 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:56:05 crc kubenswrapper[4740]: I0216 12:56:05.046021 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:56:06 crc kubenswrapper[4740]: I0216 12:56:06.777312 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:56:06 crc kubenswrapper[4740]: I0216 12:56:06.777371 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:56:06 crc kubenswrapper[4740]: I0216 12:56:06.828011 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.096712 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tbqv5" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.147441 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.147512 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.198186 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.925685 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:56:07 crc kubenswrapper[4740]: I0216 12:56:07.925757 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:56:08 crc kubenswrapper[4740]: I0216 12:56:08.103757 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:08 crc kubenswrapper[4740]: I0216 12:56:08.959216 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lrlzg" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="registry-server" 
probeResult="failure" output=< Feb 16 12:56:08 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 12:56:08 crc kubenswrapper[4740]: > Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.194740 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"] Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.258153 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 12:56:09 crc kubenswrapper[4740]: E0216 12:56:09.258389 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95462a9-2f88-47a0-b230-2f824b38a575" containerName="pruner" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.258416 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95462a9-2f88-47a0-b230-2f824b38a575" containerName="pruner" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.258542 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95462a9-2f88-47a0-b230-2f824b38a575" containerName="pruner" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.258988 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.261515 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.261591 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.306790 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.306856 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.314727 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.408074 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.408132 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.408658 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.432805 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.576281 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 12:56:09 crc kubenswrapper[4740]: I0216 12:56:09.972018 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 12:56:10 crc kubenswrapper[4740]: I0216 12:56:10.079654 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71598100-ab8f-489f-9a0a-d5396867ddc2","Type":"ContainerStarted","Data":"1320e21be3a37032e7f7ddbdf737319e299f99317830fe9cd578e0614d523d5d"} Feb 16 12:56:10 crc kubenswrapper[4740]: I0216 12:56:10.079869 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gcgnl" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="registry-server" containerID="cri-o://b41c83a6cada398da2a369d1028d362c634c4a1c17432a3088895c384269d048" gracePeriod=2 Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.293845 4740 generic.go:334] "Generic (PLEG): container finished" podID="f80b641a-1e2b-4db3-9298-08042171a404" containerID="b41c83a6cada398da2a369d1028d362c634c4a1c17432a3088895c384269d048" exitCode=0 Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.293905 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerDied","Data":"b41c83a6cada398da2a369d1028d362c634c4a1c17432a3088895c384269d048"} Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.294256 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.294610 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcgnl" event={"ID":"f80b641a-1e2b-4db3-9298-08042171a404","Type":"ContainerDied","Data":"0b376e259a09836cfaec84762b20e19cdb8174f29dd03942db440dec05590e0e"} Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.294645 4740 scope.go:117] "RemoveContainer" containerID="b41c83a6cada398da2a369d1028d362c634c4a1c17432a3088895c384269d048" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.302994 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71598100-ab8f-489f-9a0a-d5396867ddc2","Type":"ContainerStarted","Data":"39c58584f44a332a5d4cc984fd318aa7650172a5c525be1d1bd0913c0c6b6760"} Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.322288 4740 scope.go:117] "RemoveContainer" containerID="16e02cae8966336d5b6d2314925a616b32a6590e28fe0b840dfe710d1fb15fab" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.348089 4740 scope.go:117] "RemoveContainer" containerID="a95746ac673c65b743bb2c6ae6349ae88b4476464a895083995fbb1946f5d59e" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.351233 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.351217803 podStartE2EDuration="4.351217803s" podCreationTimestamp="2026-02-16 12:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:13.351102739 +0000 UTC m=+200.727451480" watchObservedRunningTime="2026-02-16 12:56:13.351217803 +0000 UTC m=+200.727566524" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.488087 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities\") pod \"f80b641a-1e2b-4db3-9298-08042171a404\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.488229 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjnrg\" (UniqueName: \"kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg\") pod \"f80b641a-1e2b-4db3-9298-08042171a404\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.489184 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities" (OuterVolumeSpecName: "utilities") pod "f80b641a-1e2b-4db3-9298-08042171a404" (UID: "f80b641a-1e2b-4db3-9298-08042171a404"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.489278 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content\") pod \"f80b641a-1e2b-4db3-9298-08042171a404\" (UID: \"f80b641a-1e2b-4db3-9298-08042171a404\") " Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.489779 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.493528 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg" (OuterVolumeSpecName: "kube-api-access-xjnrg") pod "f80b641a-1e2b-4db3-9298-08042171a404" (UID: "f80b641a-1e2b-4db3-9298-08042171a404"). InnerVolumeSpecName "kube-api-access-xjnrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:13 crc kubenswrapper[4740]: I0216 12:56:13.590652 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjnrg\" (UniqueName: \"kubernetes.io/projected/f80b641a-1e2b-4db3-9298-08042171a404-kube-api-access-xjnrg\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.099760 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f80b641a-1e2b-4db3-9298-08042171a404" (UID: "f80b641a-1e2b-4db3-9298-08042171a404"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.196230 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80b641a-1e2b-4db3-9298-08042171a404-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.310050 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcgnl" Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.311512 4740 generic.go:334] "Generic (PLEG): container finished" podID="71598100-ab8f-489f-9a0a-d5396867ddc2" containerID="39c58584f44a332a5d4cc984fd318aa7650172a5c525be1d1bd0913c0c6b6760" exitCode=0 Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.311561 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71598100-ab8f-489f-9a0a-d5396867ddc2","Type":"ContainerDied","Data":"39c58584f44a332a5d4cc984fd318aa7650172a5c525be1d1bd0913c0c6b6760"} Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.346330 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"] Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.349376 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcgnl"] Feb 16 12:56:14 crc kubenswrapper[4740]: I0216 12:56:14.962169 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-smtc5" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.054247 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 12:56:15 crc kubenswrapper[4740]: E0216 12:56:15.054511 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="extract-content" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.054528 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="extract-content" Feb 16 12:56:15 crc kubenswrapper[4740]: E0216 12:56:15.054549 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="extract-utilities" Feb 16 12:56:15 crc kubenswrapper[4740]: 
I0216 12:56:15.054558 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="extract-utilities" Feb 16 12:56:15 crc kubenswrapper[4740]: E0216 12:56:15.054577 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="registry-server" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.054585 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="registry-server" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.054702 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80b641a-1e2b-4db3-9298-08042171a404" containerName="registry-server" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.055181 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.065878 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.208938 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.209261 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.209517 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.288761 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f80b641a-1e2b-4db3-9298-08042171a404" path="/var/lib/kubelet/pods/f80b641a-1e2b-4db3-9298-08042171a404/volumes" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.310585 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.310732 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.310758 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.310851 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.310851 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.317457 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerStarted","Data":"512a8521c4f3d6ef45799cb122a17152d8669df458582ef9c533c39abd849ac8"} Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.349470 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access\") pod \"installer-9-crc\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.379270 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.555787 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.574953 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.575036 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.575090 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj"
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.576045 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.576166 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b" gracePeriod=600
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.613869 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir\") pod \"71598100-ab8f-489f-9a0a-d5396867ddc2\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") "
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.613950 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access\") pod \"71598100-ab8f-489f-9a0a-d5396867ddc2\" (UID: \"71598100-ab8f-489f-9a0a-d5396867ddc2\") "
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.614095 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "71598100-ab8f-489f-9a0a-d5396867ddc2" (UID: "71598100-ab8f-489f-9a0a-d5396867ddc2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.614197 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71598100-ab8f-489f-9a0a-d5396867ddc2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.618601 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "71598100-ab8f-489f-9a0a-d5396867ddc2" (UID: "71598100-ab8f-489f-9a0a-d5396867ddc2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.714651 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71598100-ab8f-489f-9a0a-d5396867ddc2-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:15 crc kubenswrapper[4740]: I0216 12:56:15.777674 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 16 12:56:15 crc kubenswrapper[4740]: W0216 12:56:15.790179 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod64be474a_1d70_42d2_aa8b_977624363891.slice/crio-67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6 WatchSource:0}: Error finding container 67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6: Status 404 returned error can't find the container with id 67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.324758 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b" exitCode=0
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.324866 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b"}
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.324899 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4"}
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.327614 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"64be474a-1d70-42d2-aa8b-977624363891","Type":"ContainerStarted","Data":"1ab679fc04f940cb37eaeb115a58b3e3c81632adb21deae450b25d776eebad8d"}
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.327655 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"64be474a-1d70-42d2-aa8b-977624363891","Type":"ContainerStarted","Data":"67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6"}
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.329727 4740 generic.go:334] "Generic (PLEG): container finished" podID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerID="671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc" exitCode=0
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.329766 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerDied","Data":"671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc"}
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.332944 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71598100-ab8f-489f-9a0a-d5396867ddc2","Type":"ContainerDied","Data":"1320e21be3a37032e7f7ddbdf737319e299f99317830fe9cd578e0614d523d5d"}
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.332973 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1320e21be3a37032e7f7ddbdf737319e299f99317830fe9cd578e0614d523d5d"
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.332999 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.335388 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerStarted","Data":"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7"}
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.338235 4740 generic.go:334] "Generic (PLEG): container finished" podID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerID="512a8521c4f3d6ef45799cb122a17152d8669df458582ef9c533c39abd849ac8" exitCode=0
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.338278 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerDied","Data":"512a8521c4f3d6ef45799cb122a17152d8669df458582ef9c533c39abd849ac8"}
Feb 16 12:56:16 crc kubenswrapper[4740]: I0216 12:56:16.392936 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.39292148 podStartE2EDuration="1.39292148s" podCreationTimestamp="2026-02-16 12:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:16.389545282 +0000 UTC m=+203.765894003" watchObservedRunningTime="2026-02-16 12:56:16.39292148 +0000 UTC m=+203.769270201"
Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.346835 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerStarted","Data":"3facc8cd75cb42481cbeb449975400c1d7f9fa10b8fde33d3f19e60d331253a1"}
Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.349027 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerStarted","Data":"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"}
Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.350832 4740 generic.go:334] "Generic (PLEG): container finished" podID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerID="0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7" exitCode=0
Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.350913 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerDied","Data":"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7"}
Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.365501 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wwk79" podStartSLOduration=3.260086465 podStartE2EDuration="50.365482415s" podCreationTimestamp="2026-02-16 12:55:27 +0000 UTC" firstStartedPulling="2026-02-16 12:55:29.654861211 +0000 UTC m=+157.031209932" lastFinishedPulling="2026-02-16 12:56:16.760257131 +0000 UTC m=+204.136605882" observedRunningTime="2026-02-16 12:56:17.364398619 +0000 UTC m=+204.740747340" watchObservedRunningTime="2026-02-16 12:56:17.365482415 +0000 UTC m=+204.741831136"
Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.402680 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z48zk" podStartSLOduration=3.156457899 podStartE2EDuration="53.402660272s" podCreationTimestamp="2026-02-16 12:55:24 +0000 UTC" firstStartedPulling="2026-02-16 12:55:26.555749641 +0000 UTC m=+153.932098362" lastFinishedPulling="2026-02-16 12:56:16.801952024 +0000 UTC m=+204.178300735" observedRunningTime="2026-02-16 12:56:17.398052753 +0000 UTC m=+204.774401474" watchObservedRunningTime="2026-02-16 12:56:17.402660272 +0000 UTC m=+204.779008993"
Feb 16 12:56:17 crc kubenswrapper[4740]: I0216 12:56:17.967398 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:56:18 crc kubenswrapper[4740]: I0216 12:56:18.009313 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:56:18 crc kubenswrapper[4740]: I0216 12:56:18.334907 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:56:18 crc kubenswrapper[4740]: I0216 12:56:18.334964 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:56:18 crc kubenswrapper[4740]: I0216 12:56:18.365443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerStarted","Data":"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9"}
Feb 16 12:56:18 crc kubenswrapper[4740]: I0216 12:56:18.386410 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-22crz" podStartSLOduration=3.174872801 podStartE2EDuration="54.386394606s" podCreationTimestamp="2026-02-16 12:55:24 +0000 UTC" firstStartedPulling="2026-02-16 12:55:26.557167466 +0000 UTC m=+153.933516187" lastFinishedPulling="2026-02-16 12:56:17.768689271 +0000 UTC m=+205.145037992" observedRunningTime="2026-02-16 12:56:18.384314149 +0000 UTC m=+205.760662880" watchObservedRunningTime="2026-02-16 12:56:18.386394606 +0000 UTC m=+205.762743327"
Feb 16 12:56:19 crc kubenswrapper[4740]: I0216 12:56:19.375372 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wwk79" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="registry-server" probeResult="failure" output=<
Feb 16 12:56:19 crc kubenswrapper[4740]: 	timeout: failed to connect service ":50051" within 1s
Feb 16 12:56:19 crc kubenswrapper[4740]: >
Feb 16 12:56:20 crc kubenswrapper[4740]: I0216 12:56:20.378415 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerStarted","Data":"9eb6019c2628026c4d4234fab687e7e554de2d5f7e7f3b6193043a4204e435e3"}
Feb 16 12:56:21 crc kubenswrapper[4740]: I0216 12:56:21.387608 4740 generic.go:334] "Generic (PLEG): container finished" podID="44198116-006f-4be3-ad53-3d32576dd681" containerID="9eb6019c2628026c4d4234fab687e7e554de2d5f7e7f3b6193043a4204e435e3" exitCode=0
Feb 16 12:56:21 crc kubenswrapper[4740]: I0216 12:56:21.387750 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerDied","Data":"9eb6019c2628026c4d4234fab687e7e554de2d5f7e7f3b6193043a4204e435e3"}
Feb 16 12:56:23 crc kubenswrapper[4740]: I0216 12:56:23.405100 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerStarted","Data":"b56fdd76a68f0a0d399f103627cbf5f18b347ef09e6032c9af5b14a6c5355e37"}
Feb 16 12:56:23 crc kubenswrapper[4740]: I0216 12:56:23.428666 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hzpc4" podStartSLOduration=2.77918883 podStartE2EDuration="59.428650825s" podCreationTimestamp="2026-02-16 12:55:24 +0000 UTC" firstStartedPulling="2026-02-16 12:55:26.565153757 +0000 UTC m=+153.941502478" lastFinishedPulling="2026-02-16 12:56:23.214615752 +0000 UTC m=+210.590964473" observedRunningTime="2026-02-16 12:56:23.424955147 +0000 UTC m=+210.801303868" watchObservedRunningTime="2026-02-16 12:56:23.428650825 +0000 UTC m=+210.804999546"
Feb 16 12:56:24 crc kubenswrapper[4740]: I0216 12:56:24.788492 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:56:24 crc kubenswrapper[4740]: I0216 12:56:24.791364 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:56:24 crc kubenswrapper[4740]: I0216 12:56:24.832664 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.166480 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.166684 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.209244 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hzpc4"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.401235 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.401329 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.445667 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.471377 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:56:25 crc kubenswrapper[4740]: I0216 12:56:25.494155 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.395904 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z48zk"]
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.443367 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z48zk" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="registry-server" containerID="cri-o://c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2" gracePeriod=2
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.795599 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.965764 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content\") pod \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") "
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.965873 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities\") pod \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") "
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.965912 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frh8h\" (UniqueName: \"kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h\") pod \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\" (UID: \"e9545e2f-e72f-4944-bc7a-ed9b052a34b0\") "
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.968062 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities" (OuterVolumeSpecName: "utilities") pod "e9545e2f-e72f-4944-bc7a-ed9b052a34b0" (UID: "e9545e2f-e72f-4944-bc7a-ed9b052a34b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:56:27 crc kubenswrapper[4740]: I0216 12:56:27.978261 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h" (OuterVolumeSpecName: "kube-api-access-frh8h") pod "e9545e2f-e72f-4944-bc7a-ed9b052a34b0" (UID: "e9545e2f-e72f-4944-bc7a-ed9b052a34b0"). InnerVolumeSpecName "kube-api-access-frh8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.025365 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9545e2f-e72f-4944-bc7a-ed9b052a34b0" (UID: "e9545e2f-e72f-4944-bc7a-ed9b052a34b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.067329 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.067360 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.067372 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frh8h\" (UniqueName: \"kubernetes.io/projected/e9545e2f-e72f-4944-bc7a-ed9b052a34b0-kube-api-access-frh8h\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.380141 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.418917 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wwk79"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.449626 4740 generic.go:334] "Generic (PLEG): container finished" podID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerID="c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2" exitCode=0
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.449719 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z48zk"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.449724 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerDied","Data":"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"}
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.449775 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z48zk" event={"ID":"e9545e2f-e72f-4944-bc7a-ed9b052a34b0","Type":"ContainerDied","Data":"a9c9a2711d899c1e5a260796e95114ebf5c80382d12c8808c0846487a96c8aa1"}
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.449797 4740 scope.go:117] "RemoveContainer" containerID="c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.463313 4740 scope.go:117] "RemoveContainer" containerID="671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.483036 4740 scope.go:117] "RemoveContainer" containerID="1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.497692 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z48zk"]
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.501878 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z48zk"]
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.501939 4740 scope.go:117] "RemoveContainer" containerID="c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"
Feb 16 12:56:28 crc kubenswrapper[4740]: E0216 12:56:28.502435 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2\": container with ID starting with c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2 not found: ID does not exist" containerID="c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.502471 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2"} err="failed to get container status \"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2\": rpc error: code = NotFound desc = could not find container \"c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2\": container with ID starting with c35e181c94a6bfea2678127228688d2129a492caed033158d95773bb1c1274c2 not found: ID does not exist"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.502511 4740 scope.go:117] "RemoveContainer" containerID="671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc"
Feb 16 12:56:28 crc kubenswrapper[4740]: E0216 12:56:28.502945 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc\": container with ID starting with 671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc not found: ID does not exist" containerID="671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.502981 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc"} err="failed to get container status \"671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc\": rpc error: code = NotFound desc = could not find container \"671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc\": container with ID starting with 671e402ec05b8aa409656f0a850acda6e8afc266f0f6e2096259a2a4cfa73bfc not found: ID does not exist"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.502996 4740 scope.go:117] "RemoveContainer" containerID="1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88"
Feb 16 12:56:28 crc kubenswrapper[4740]: E0216 12:56:28.503259 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88\": container with ID starting with 1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88 not found: ID does not exist" containerID="1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.503281 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88"} err="failed to get container status \"1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88\": rpc error: code = NotFound desc = could not find container \"1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88\": container with ID starting with 1c6186870b508f0c5551222601cb9f555747103803bfa61ad7d29d39eb4ced88 not found: ID does not exist"
Feb 16 12:56:28 crc kubenswrapper[4740]: I0216 12:56:28.717649 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" containerID="cri-o://fbb95d26f626afc3bfc63396feee48e9a3723a0831baf7b1a599353c30ec5440" gracePeriod=15
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.289569 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" path="/var/lib/kubelet/pods/e9545e2f-e72f-4944-bc7a-ed9b052a34b0/volumes"
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.457504 4740 generic.go:334] "Generic (PLEG): container finished" podID="a9a22462-173f-4075-927a-30493a5745d7" containerID="fbb95d26f626afc3bfc63396feee48e9a3723a0831baf7b1a599353c30ec5440" exitCode=0
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.457590 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" event={"ID":"a9a22462-173f-4075-927a-30493a5745d7","Type":"ContainerDied","Data":"fbb95d26f626afc3bfc63396feee48e9a3723a0831baf7b1a599353c30ec5440"}
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.768705 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7"
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.899907 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900662 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900731 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900760 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900875 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900863 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900899 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900935 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5w72\" (UniqueName: \"kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.900982 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901009 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901032 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901056 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901077 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901095 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.901113 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection\") pod \"a9a22462-173f-4075-927a-30493a5745d7\" (UID: \"a9a22462-173f-4075-927a-30493a5745d7\") "
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.902031 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.902687 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.902713 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9a22462-173f-4075-927a-30493a5745d7-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.902851 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.903105 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.903324 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.906151 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.906374 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.906549 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.907148 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72" (OuterVolumeSpecName: "kube-api-access-n5w72") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "kube-api-access-n5w72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.907322 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.907433 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.907579 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.908830 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:56:29 crc kubenswrapper[4740]: I0216 12:56:29.909158 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a9a22462-173f-4075-927a-30493a5745d7" (UID: "a9a22462-173f-4075-927a-30493a5745d7"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.004487 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.005132 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.005325 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9a22462-173f-4075-927a-30493a5745d7-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.005490 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.005659 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.005891 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006051 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006181 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006356 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006480 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5w72\" (UniqueName: \"kubernetes.io/projected/a9a22462-173f-4075-927a-30493a5745d7-kube-api-access-n5w72\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006631 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.006776 4740 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9a22462-173f-4075-927a-30493a5745d7-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.467773 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" event={"ID":"a9a22462-173f-4075-927a-30493a5745d7","Type":"ContainerDied","Data":"b1fdab80b8055470789558626b94e6fd689f065930bcfe2c60fd34eb94175732"} Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.467847 4740 scope.go:117] "RemoveContainer" containerID="fbb95d26f626afc3bfc63396feee48e9a3723a0831baf7b1a599353c30ec5440" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.467907 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wknn7" Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.508018 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"] Feb 16 12:56:30 crc kubenswrapper[4740]: I0216 12:56:30.521332 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wknn7"] Feb 16 12:56:31 crc kubenswrapper[4740]: I0216 12:56:31.288381 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a22462-173f-4075-927a-30493a5745d7" path="/var/lib/kubelet/pods/a9a22462-173f-4075-927a-30493a5745d7/volumes" Feb 16 12:56:31 crc kubenswrapper[4740]: I0216 12:56:31.798613 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"] Feb 16 12:56:31 crc kubenswrapper[4740]: I0216 12:56:31.799018 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wwk79" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="registry-server" 
containerID="cri-o://3facc8cd75cb42481cbeb449975400c1d7f9fa10b8fde33d3f19e60d331253a1" gracePeriod=2 Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.482457 4740 generic.go:334] "Generic (PLEG): container finished" podID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerID="3facc8cd75cb42481cbeb449975400c1d7f9fa10b8fde33d3f19e60d331253a1" exitCode=0 Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.482680 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerDied","Data":"3facc8cd75cb42481cbeb449975400c1d7f9fa10b8fde33d3f19e60d331253a1"} Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.703622 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.845665 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m9jh\" (UniqueName: \"kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh\") pod \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.845751 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities\") pod \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.845797 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content\") pod \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\" (UID: \"fa69bf39-1ed0-42ba-91f9-c401e7fb9337\") " Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 
12:56:32.847697 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities" (OuterVolumeSpecName: "utilities") pod "fa69bf39-1ed0-42ba-91f9-c401e7fb9337" (UID: "fa69bf39-1ed0-42ba-91f9-c401e7fb9337"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.861126 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh" (OuterVolumeSpecName: "kube-api-access-7m9jh") pod "fa69bf39-1ed0-42ba-91f9-c401e7fb9337" (UID: "fa69bf39-1ed0-42ba-91f9-c401e7fb9337"). InnerVolumeSpecName "kube-api-access-7m9jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.947578 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m9jh\" (UniqueName: \"kubernetes.io/projected/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-kube-api-access-7m9jh\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.947622 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:32 crc kubenswrapper[4740]: I0216 12:56:32.974837 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa69bf39-1ed0-42ba-91f9-c401e7fb9337" (UID: "fa69bf39-1ed0-42ba-91f9-c401e7fb9337"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.048235 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa69bf39-1ed0-42ba-91f9-c401e7fb9337-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.488280 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwk79" event={"ID":"fa69bf39-1ed0-42ba-91f9-c401e7fb9337","Type":"ContainerDied","Data":"ce33be103aa47d28e69db79295eb5459d1dc46ee55c5e4d98d8d9854797067ed"} Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.488334 4740 scope.go:117] "RemoveContainer" containerID="3facc8cd75cb42481cbeb449975400c1d7f9fa10b8fde33d3f19e60d331253a1" Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.488368 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwk79" Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.505214 4740 scope.go:117] "RemoveContainer" containerID="512a8521c4f3d6ef45799cb122a17152d8669df458582ef9c533c39abd849ac8" Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.506952 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"] Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.510080 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wwk79"] Feb 16 12:56:33 crc kubenswrapper[4740]: I0216 12:56:33.518888 4740 scope.go:117] "RemoveContainer" containerID="2a749969623029aaccbdf27a9810d459d9c5039d65880ed9d91f3a4574878a8a" Feb 16 12:56:35 crc kubenswrapper[4740]: I0216 12:56:35.206214 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:56:35 crc kubenswrapper[4740]: I0216 12:56:35.290480 4740 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" path="/var/lib/kubelet/pods/fa69bf39-1ed0-42ba-91f9-c401e7fb9337/volumes" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.197227 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"] Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.197731 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hzpc4" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="registry-server" containerID="cri-o://b56fdd76a68f0a0d399f103627cbf5f18b347ef09e6032c9af5b14a6c5355e37" gracePeriod=2 Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471506 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d"] Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471787 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="extract-content" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471829 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="extract-content" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471844 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="extract-utilities" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471853 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="extract-utilities" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471865 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="extract-utilities" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471874 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="extract-utilities" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471885 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471892 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471904 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471911 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471925 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71598100-ab8f-489f-9a0a-d5396867ddc2" containerName="pruner" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471934 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="71598100-ab8f-489f-9a0a-d5396867ddc2" containerName="pruner" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471948 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471955 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: E0216 12:56:36.471969 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="extract-content" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.471978 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="extract-content" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.472105 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa69bf39-1ed0-42ba-91f9-c401e7fb9337" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.472118 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9545e2f-e72f-4944-bc7a-ed9b052a34b0" containerName="registry-server" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.472128 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="71598100-ab8f-489f-9a0a-d5396867ddc2" containerName="pruner" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.472141 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a22462-173f-4075-927a-30493a5745d7" containerName="oauth-openshift" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.472542 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.474862 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.476618 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.476792 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.479241 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.479527 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.480952 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.480987 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.481054 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.482354 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.483049 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.484310 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.484335 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.490715 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-dir\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.490796 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.490874 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.490914 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.490993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491032 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491067 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491099 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-session\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491122 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-error\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491162 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" 
Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491431 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491469 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-policies\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491542 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-login\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.491588 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bndkk\" (UniqueName: \"kubernetes.io/projected/0e156096-dd90-4dd0-80ba-42d0642822ee-kube-api-access-bndkk\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.495095 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 
16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.501247 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.523923 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.525214 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d"] Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.528578 4740 generic.go:334] "Generic (PLEG): container finished" podID="44198116-006f-4be3-ad53-3d32576dd681" containerID="b56fdd76a68f0a0d399f103627cbf5f18b347ef09e6032c9af5b14a6c5355e37" exitCode=0 Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.528622 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerDied","Data":"b56fdd76a68f0a0d399f103627cbf5f18b347ef09e6032c9af5b14a6c5355e37"} Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593222 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593294 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-session\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " 
pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593317 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-error\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593346 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593374 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593398 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-policies\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593419 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-login\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593442 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bndkk\" (UniqueName: \"kubernetes.io/projected/0e156096-dd90-4dd0-80ba-42d0642822ee-kube-api-access-bndkk\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593467 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-dir\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593491 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593511 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " 
pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593532 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593561 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.593579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.595187 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-dir\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.595708 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.595714 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-audit-policies\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.596237 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.597669 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.599870 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc 
kubenswrapper[4740]: I0216 12:56:36.600256 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.600546 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.601548 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.602347 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-login\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.602938 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.605369 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-system-session\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.611161 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0e156096-dd90-4dd0-80ba-42d0642822ee-v4-0-config-user-template-error\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.612025 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bndkk\" (UniqueName: \"kubernetes.io/projected/0e156096-dd90-4dd0-80ba-42d0642822ee-kube-api-access-bndkk\") pod \"oauth-openshift-5cf8f9f8d-xvw7d\" (UID: \"0e156096-dd90-4dd0-80ba-42d0642822ee\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:36 crc kubenswrapper[4740]: I0216 12:56:36.799287 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.168130 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.200857 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities\") pod \"44198116-006f-4be3-ad53-3d32576dd681\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.200934 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content\") pod \"44198116-006f-4be3-ad53-3d32576dd681\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.200985 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb4r9\" (UniqueName: \"kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9\") pod \"44198116-006f-4be3-ad53-3d32576dd681\" (UID: \"44198116-006f-4be3-ad53-3d32576dd681\") " Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.201824 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities" (OuterVolumeSpecName: "utilities") pod "44198116-006f-4be3-ad53-3d32576dd681" (UID: "44198116-006f-4be3-ad53-3d32576dd681"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.205070 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9" (OuterVolumeSpecName: "kube-api-access-xb4r9") pod "44198116-006f-4be3-ad53-3d32576dd681" (UID: "44198116-006f-4be3-ad53-3d32576dd681"). InnerVolumeSpecName "kube-api-access-xb4r9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.230394 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d"] Feb 16 12:56:37 crc kubenswrapper[4740]: W0216 12:56:37.242030 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e156096_dd90_4dd0_80ba_42d0642822ee.slice/crio-c152d6ea1dfce7e2360da18d09c714bf8de7c16a910a5c1c3f6addcb645753bf WatchSource:0}: Error finding container c152d6ea1dfce7e2360da18d09c714bf8de7c16a910a5c1c3f6addcb645753bf: Status 404 returned error can't find the container with id c152d6ea1dfce7e2360da18d09c714bf8de7c16a910a5c1c3f6addcb645753bf Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.258219 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44198116-006f-4be3-ad53-3d32576dd681" (UID: "44198116-006f-4be3-ad53-3d32576dd681"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.302707 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.302747 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44198116-006f-4be3-ad53-3d32576dd681-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.302759 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb4r9\" (UniqueName: \"kubernetes.io/projected/44198116-006f-4be3-ad53-3d32576dd681-kube-api-access-xb4r9\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.537228 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzpc4" event={"ID":"44198116-006f-4be3-ad53-3d32576dd681","Type":"ContainerDied","Data":"e130a9aace627f73e9efde47dbcd50406ac735047566ac4275095c2434589e89"} Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.537522 4740 scope.go:117] "RemoveContainer" containerID="b56fdd76a68f0a0d399f103627cbf5f18b347ef09e6032c9af5b14a6c5355e37" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.537258 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzpc4" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.538226 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" event={"ID":"0e156096-dd90-4dd0-80ba-42d0642822ee","Type":"ContainerStarted","Data":"c152d6ea1dfce7e2360da18d09c714bf8de7c16a910a5c1c3f6addcb645753bf"} Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.555305 4740 scope.go:117] "RemoveContainer" containerID="9eb6019c2628026c4d4234fab687e7e554de2d5f7e7f3b6193043a4204e435e3" Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.558098 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"] Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.562586 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hzpc4"] Feb 16 12:56:37 crc kubenswrapper[4740]: I0216 12:56:37.567766 4740 scope.go:117] "RemoveContainer" containerID="2d50e15e7dfab2ba0d8e36c47eedb3a59a16e3076615834b16679a8be2cde520" Feb 16 12:56:38 crc kubenswrapper[4740]: I0216 12:56:38.544926 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" event={"ID":"0e156096-dd90-4dd0-80ba-42d0642822ee","Type":"ContainerStarted","Data":"f90e71c3ade1b222a48b125680139be9e90427c868532cbcc36fa33a78369fef"} Feb 16 12:56:38 crc kubenswrapper[4740]: I0216 12:56:38.546307 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:38 crc kubenswrapper[4740]: I0216 12:56:38.551353 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" Feb 16 12:56:38 crc kubenswrapper[4740]: I0216 12:56:38.564105 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-5cf8f9f8d-xvw7d" podStartSLOduration=35.564093185 podStartE2EDuration="35.564093185s" podCreationTimestamp="2026-02-16 12:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:56:38.563422943 +0000 UTC m=+225.939771654" watchObservedRunningTime="2026-02-16 12:56:38.564093185 +0000 UTC m=+225.940441906" Feb 16 12:56:39 crc kubenswrapper[4740]: I0216 12:56:39.288866 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44198116-006f-4be3-ad53-3d32576dd681" path="/var/lib/kubelet/pods/44198116-006f-4be3-ad53-3d32576dd681/volumes" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.768113 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.768883 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="extract-utilities" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.768898 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="extract-utilities" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.768911 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="registry-server" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.768937 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="registry-server" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.768960 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="extract-content" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.768967 4740 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="extract-content" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.769125 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="44198116-006f-4be3-ad53-3d32576dd681" containerName="registry-server" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.769619 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.769668 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.769793 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770313 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32" gracePeriod=15 Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770401 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6" gracePeriod=15 Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770382 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72" gracePeriod=15 Feb 16 12:56:53 crc 
kubenswrapper[4740]: I0216 12:56:53.770356 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb" gracePeriod=15 Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770538 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770559 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770574 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770584 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770598 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770607 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770617 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770625 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770636 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770644 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770655 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770663 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770673 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770680 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770395 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4" gracePeriod=15 Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770831 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770847 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770856 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770865 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770874 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770883 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: E0216 12:56:53.770971 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.770978 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.771064 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.774368 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.812867 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.924928 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.924985 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925019 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925049 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925117 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925482 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925584 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:53 crc kubenswrapper[4740]: I0216 12:56:53.925692 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026618 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026679 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026702 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026727 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026775 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026800 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026852 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026914 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026874 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026950 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026991 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.026997 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.027036 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.027073 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.027080 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.027112 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.101682 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:56:54 crc kubenswrapper[4740]: W0216 12:56:54.132715 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-fdafda45f2a3726c1a5b1d1b6f87424d6dcecfc66009ab0e01dfa4a5f5ff64fd WatchSource:0}: Error finding container fdafda45f2a3726c1a5b1d1b6f87424d6dcecfc66009ab0e01dfa4a5f5ff64fd: Status 404 returned error can't find the container with id fdafda45f2a3726c1a5b1d1b6f87424d6dcecfc66009ab0e01dfa4a5f5ff64fd Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.138668 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894bb693d3a7df5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,LastTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.377147 4740 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.377884 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.378385 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.378628 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.379101 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.379175 4740 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.379878 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.422438 4740 event.go:368] "Unable to write event (may retry 
after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894bb693d3a7df5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,LastTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.580547 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.631429 4740 generic.go:334] "Generic (PLEG): container finished" podID="64be474a-1d70-42d2-aa8b-977624363891" containerID="1ab679fc04f940cb37eaeb115a58b3e3c81632adb21deae450b25d776eebad8d" exitCode=0 Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.632158 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"64be474a-1d70-42d2-aa8b-977624363891","Type":"ContainerDied","Data":"1ab679fc04f940cb37eaeb115a58b3e3c81632adb21deae450b25d776eebad8d"} Feb 
16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.633441 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.634119 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.634831 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7"} Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.634861 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fdafda45f2a3726c1a5b1d1b6f87424d6dcecfc66009ab0e01dfa4a5f5ff64fd"} Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.635620 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.636202 4740 status_manager.go:851] "Failed to get status for 
pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.639497 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.640860 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.641490 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72" exitCode=0 Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.641511 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb" exitCode=0 Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.641521 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6" exitCode=0 Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.641529 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4" exitCode=2 Feb 16 12:56:54 crc kubenswrapper[4740]: I0216 12:56:54.641561 4740 scope.go:117] "RemoveContainer" containerID="605c1a6a86b72bc8b5b6858740e7d52552306315744f2e26faa07ae6fae93878" Feb 16 12:56:54 crc kubenswrapper[4740]: E0216 12:56:54.981565 
4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Feb 16 12:56:55 crc kubenswrapper[4740]: I0216 12:56:55.654185 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:56:55 crc kubenswrapper[4740]: E0216 12:56:55.782498 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Feb 16 12:56:55 crc kubenswrapper[4740]: I0216 12:56:55.984027 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:55 crc kubenswrapper[4740]: I0216 12:56:55.984872 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:55 crc kubenswrapper[4740]: I0216 12:56:55.985056 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.071650 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.072503 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.073179 4740 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.073724 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.073983 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177143 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock\") pod \"64be474a-1d70-42d2-aa8b-977624363891\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177216 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177252 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177302 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock" (OuterVolumeSpecName: "var-lock") pod "64be474a-1d70-42d2-aa8b-977624363891" (UID: "64be474a-1d70-42d2-aa8b-977624363891"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177311 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access\") pod \"64be474a-1d70-42d2-aa8b-977624363891\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177348 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177386 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir\") pod \"64be474a-1d70-42d2-aa8b-977624363891\" (UID: \"64be474a-1d70-42d2-aa8b-977624363891\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177409 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177450 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177496 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "64be474a-1d70-42d2-aa8b-977624363891" (UID: "64be474a-1d70-42d2-aa8b-977624363891"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177615 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177770 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177794 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177862 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177889 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.177901 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64be474a-1d70-42d2-aa8b-977624363891-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.182194 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "64be474a-1d70-42d2-aa8b-977624363891" (UID: "64be474a-1d70-42d2-aa8b-977624363891"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.279359 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64be474a-1d70-42d2-aa8b-977624363891-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.666504 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"64be474a-1d70-42d2-aa8b-977624363891","Type":"ContainerDied","Data":"67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6"} Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.666546 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67de1c01181359967df317bad921a5e396c51b3b28012621458d20b55dd02ab6" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.666578 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.669695 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.670655 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32" exitCode=0 Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.670717 4740 scope.go:117] "RemoveContainer" containerID="f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.670836 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.682045 4740 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.682944 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.683412 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.686973 4740 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.687478 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.688046 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.692983 4740 scope.go:117] "RemoveContainer" containerID="513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.717090 4740 scope.go:117] "RemoveContainer" containerID="3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.730872 4740 scope.go:117] "RemoveContainer" containerID="f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.742098 4740 scope.go:117] "RemoveContainer" containerID="0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.755351 4740 scope.go:117] "RemoveContainer" containerID="92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5" Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.776617 4740 scope.go:117] "RemoveContainer" containerID="f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72" Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.777294 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\": container with ID starting with f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72 not 
found: ID does not exist" containerID="f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72"
Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.777338 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72"} err="failed to get container status \"f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\": rpc error: code = NotFound desc = could not find container \"f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72\": container with ID starting with f1a9fe1bd6f29d6e5f296ade89f899ddb8a2f5463989a8b3dde242f78be9dd72 not found: ID does not exist"
Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.777375 4740 scope.go:117] "RemoveContainer" containerID="513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb"
Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.778374 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\": container with ID starting with 513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb not found: ID does not exist" containerID="513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb"
Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.778401 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb"} err="failed to get container status \"513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\": rpc error: code = NotFound desc = could not find container \"513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb\": container with ID starting with 513eb666e3192a65c76b4e4f6d52dd50f831a1107d18d03e3b4b20346ee348eb not found: ID does not exist"
Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.778417 4740 scope.go:117] "RemoveContainer" containerID="3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6"
Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.778769 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\": container with ID starting with 3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6 not found: ID does not exist" containerID="3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6"
Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.778834 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6"} err="failed to get container status \"3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\": rpc error: code = NotFound desc = could not find container \"3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6\": container with ID starting with 3e340489b1094eead3efce061aadefaeee620a13887e6e17e4e10f3d01cb83c6 not found: ID does not exist"
Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.778871 4740 scope.go:117] "RemoveContainer" containerID="f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4"
Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.779274 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\": container with ID starting with f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4 not found: ID does not exist" containerID="f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4"
Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.779323 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4"} err="failed to get container status \"f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\": rpc error: code = NotFound desc = could not find container \"f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4\": container with ID starting with f8b497a4402f7197929650a17907dc201c6a7903a1e93ff89e410f4e3f5c1dd4 not found: ID does not exist"
Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.779350 4740 scope.go:117] "RemoveContainer" containerID="0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32"
Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.779646 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\": container with ID starting with 0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32 not found: ID does not exist" containerID="0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32"
Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.779670 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32"} err="failed to get container status \"0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\": rpc error: code = NotFound desc = could not find container \"0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32\": container with ID starting with 0aa2f122b796d7738c2d4db40db1870cc45c1cbf9c5f2c8354e8779bdcc04f32 not found: ID does not exist"
Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.779685 4740 scope.go:117] "RemoveContainer" containerID="92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5"
Feb 16 12:56:56 crc kubenswrapper[4740]: E0216 12:56:56.779999 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\": container with ID starting with 92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5 not found: ID does not exist" containerID="92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5"
Feb 16 12:56:56 crc kubenswrapper[4740]: I0216 12:56:56.780084 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5"} err="failed to get container status \"92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\": rpc error: code = NotFound desc = could not find container \"92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5\": container with ID starting with 92aef7d1e0561711a9e397f89bbb471c8f876fbebaa1bd94b66cfe2586aaf0e5 not found: ID does not exist"
Feb 16 12:56:57 crc kubenswrapper[4740]: I0216 12:56:57.290157 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Feb 16 12:56:57 crc kubenswrapper[4740]: E0216 12:56:57.383700 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s"
Feb 16 12:57:00 crc kubenswrapper[4740]: E0216 12:57:00.584430 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="6.4s"
Feb 16 12:57:03 crc kubenswrapper[4740]: I0216 12:57:03.283295 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 16 12:57:03 crc kubenswrapper[4740]: I0216 12:57:03.284105 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 16 12:57:04 crc kubenswrapper[4740]: E0216 12:57:04.423361 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894bb693d3a7df5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,LastTimestamp:2026-02-16 12:56:54.136651253 +0000 UTC m=+241.512999994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.280167 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.281705 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.282280 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.301027 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.301087 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:06 crc kubenswrapper[4740]: E0216 12:57:06.301730 4740 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.302190 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:06 crc kubenswrapper[4740]: W0216 12:57:06.324992 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-70bfd7a79555bd42c122c87030ea7341b4c487e250a3bfff3c5746280c07e675 WatchSource:0}: Error finding container 70bfd7a79555bd42c122c87030ea7341b4c487e250a3bfff3c5746280c07e675: Status 404 returned error can't find the container with id 70bfd7a79555bd42c122c87030ea7341b4c487e250a3bfff3c5746280c07e675
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.736775 4740 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="cde0837214fafd0fe31cb3cc2d39a4f5a6bdafa66ea80202d737e133015cd944" exitCode=0
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.736859 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"cde0837214fafd0fe31cb3cc2d39a4f5a6bdafa66ea80202d737e133015cd944"}
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.736939 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"70bfd7a79555bd42c122c87030ea7341b4c487e250a3bfff3c5746280c07e675"}
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.737491 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.737539 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.737836 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 16 12:57:06 crc kubenswrapper[4740]: I0216 12:57:06.738133 4740 status_manager.go:851] "Failed to get status for pod" podUID="64be474a-1d70-42d2-aa8b-977624363891" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Feb 16 12:57:06 crc kubenswrapper[4740]: E0216 12:57:06.738242 4740 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:06 crc kubenswrapper[4740]: E0216 12:57:06.985708 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="7s"
Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.746743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8ae77b89f3842905ed9c4faa69998a8e3b19fe2701429b64ce778e7b8fa5ac39"}
Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.746799 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"be9a7b7b7e1acdd23322a4b3439e8199407e4d11022101adfeacad6f6234070c"}
Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.746831 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"59cc29256725447f10463fb188d447715c65be3de0e68d019c75af8c7bdc4329"}
Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.746844 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6c94ac9c7dd9b8c886e2dc911d18c5cb054c6ec5303cdbcdfdf34cbd1700e34a"}
Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.749568 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.749625 4740 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6" exitCode=1
Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.749651 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6"}
Feb 16 12:57:07 crc kubenswrapper[4740]: I0216 12:57:07.751006 4740 scope.go:117] "RemoveContainer" containerID="de1480070d6dda36cccc4ee8917d450b6602866ba32bc7874385702cf197b0d6"
Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.757759 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.758205 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec689bc0b6eac6457c59bbbccd852365ae956d00ef4ab3b43e54faa45aed03ca"}
Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.761868 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fe92759d401638af0f67ab6bd256d3d9395d0d79ee4cc637b3e4f7878377a8ec"}
Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.762078 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.762154 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:08 crc kubenswrapper[4740]: I0216 12:57:08.762170 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:10 crc kubenswrapper[4740]: I0216 12:57:10.305477 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:57:11 crc kubenswrapper[4740]: I0216 12:57:11.303550 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:11 crc kubenswrapper[4740]: I0216 12:57:11.303637 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:11 crc kubenswrapper[4740]: I0216 12:57:11.310879 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.773235 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.804883 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log"
Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.806692 4740 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fe92759d401638af0f67ab6bd256d3d9395d0d79ee4cc637b3e4f7878377a8ec" exitCode=255
Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.806771 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fe92759d401638af0f67ab6bd256d3d9395d0d79ee4cc637b3e4f7878377a8ec"}
Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.807316 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.807356 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.811037 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4bce1393-19e6-43be-bc32-b015f9dd4593"
Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.811283 4740 scope.go:117] "RemoveContainer" containerID="fe92759d401638af0f67ab6bd256d3d9395d0d79ee4cc637b3e4f7878377a8ec"
Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.814584 4740 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://6c94ac9c7dd9b8c886e2dc911d18c5cb054c6ec5303cdbcdfdf34cbd1700e34a"
Feb 16 12:57:13 crc kubenswrapper[4740]: I0216 12:57:13.814614 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.815170 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log"
Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.824147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f3a165460d95a26aacb38df0d4126c1baa3f95a50b17cf37d4c030a2208dab07"}
Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.824369 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.824461 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.824492 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.931504 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:57:14 crc kubenswrapper[4740]: I0216 12:57:14.938108 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:57:15 crc kubenswrapper[4740]: I0216 12:57:15.830588 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:15 crc kubenswrapper[4740]: I0216 12:57:15.830876 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31984faa-a340-44ed-868a-5e6e2a8dab7e"
Feb 16 12:57:20 crc kubenswrapper[4740]: I0216 12:57:20.309967 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 12:57:22 crc kubenswrapper[4740]: I0216 12:57:22.928956 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 16 12:57:23 crc kubenswrapper[4740]: I0216 12:57:23.265854 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 16 12:57:23 crc kubenswrapper[4740]: I0216 12:57:23.299620 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4bce1393-19e6-43be-bc32-b015f9dd4593"
Feb 16 12:57:24 crc kubenswrapper[4740]: I0216 12:57:24.287793 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 16 12:57:24 crc kubenswrapper[4740]: I0216 12:57:24.741558 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 16 12:57:24 crc kubenswrapper[4740]: I0216 12:57:24.911562 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 12:57:24 crc kubenswrapper[4740]: I0216 12:57:24.911600 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 16 12:57:24 crc kubenswrapper[4740]: I0216 12:57:24.997178 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.052968 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.315632 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.489700 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.614626 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.778560 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 12:57:25 crc kubenswrapper[4740]: I0216 12:57:25.977130 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.093645 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.202249 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.266132 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.424055 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.429973 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.651581 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.662543 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.710052 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.946113 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.979292 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 12:57:26 crc kubenswrapper[4740]: I0216 12:57:26.987100 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.056791 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.169150 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.280056 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.303875 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.386316 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.421387 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.422021 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.454100 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.587045 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.623307 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.717622 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.748612 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 16 12:57:27 crc kubenswrapper[4740]: I0216 12:57:27.749427 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.035709 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.105299 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.147215 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.250686 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.318035 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.351105 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.467016 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.478991 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.761743 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 16 12:57:28 crc kubenswrapper[4740]: I0216 12:57:28.767960 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.011757 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.019979 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.083540 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.083849 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.100966 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.230306 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.293170 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.417144 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.546859 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.562142 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.625754 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.834601 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.962503 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.984505 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.994059 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 12:57:29 crc kubenswrapper[4740]: I0216 12:57:29.999508 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.005679 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.127896 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.155319 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.193223 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.237920 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.241928 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.340450 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.378754 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.401709 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.418277 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.429044 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.553322 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.571667 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.635667 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.637053 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.715413 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.740999 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.752869 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.824233 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.844256 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.918221 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 12:57:30 crc kubenswrapper[4740]: I0216 12:57:30.995110 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.005122 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.057951 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.079384 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.094713 4740 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.098192 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.126606 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.149880 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.214223 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.387003 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.406243 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.413906 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.437645 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.442898 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.448224 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.453687 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.485682 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.582765 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.597911 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.653704 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.784914 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.828345 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.898486 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.904418 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 12:57:31 crc kubenswrapper[4740]: I0216 12:57:31.933655 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.036712 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 12:57:32 crc 
kubenswrapper[4740]: I0216 12:57:32.083930 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.175421 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.185957 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.230335 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.250183 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.292348 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.332094 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.451335 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.480473 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.510206 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.554141 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.558396 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.617970 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.655403 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.709956 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.715458 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.755679 4740 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.772997 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.795404 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.799864 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.837310 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.930787 4740 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 12:57:32 crc kubenswrapper[4740]: I0216 12:57:32.948848 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.025402 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.103861 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.208163 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.221737 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.249157 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.277340 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.302843 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.348374 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.401752 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 12:57:33 crc 
kubenswrapper[4740]: I0216 12:57:33.414843 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.542366 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.568195 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.601116 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.659662 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.660888 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.707339 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.795977 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.814721 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.863872 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 12:57:33 crc 
kubenswrapper[4740]: I0216 12:57:33.888835 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.960860 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 12:57:33 crc kubenswrapper[4740]: I0216 12:57:33.984664 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.010520 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.062918 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.130472 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.168436 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.227083 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.234511 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.314734 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.314777 4740 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.358656 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.374332 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.437612 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.608873 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.704172 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.728267 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.735093 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.774520 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.788625 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.809835 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.902395 4740 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.924882 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 12:57:34 crc kubenswrapper[4740]: I0216 12:57:34.988703 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.008644 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.095019 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.196877 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.403767 4740 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.426984 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.452711 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.495157 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.504938 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 
12:57:35.527312 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.589592 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.604286 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.688072 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.709262 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.722175 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.783393 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.785447 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.887783 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.925077 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 12:57:35 crc kubenswrapper[4740]: I0216 12:57:35.927196 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.092150 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.157551 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.159727 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.384003 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.393364 4740 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.411046 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.436324 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.441604 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.453710 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.567767 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.587166 4740 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.676948 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.697849 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.772694 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.835688 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.840941 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.967707 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.972588 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.982224 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 12:57:36 crc kubenswrapper[4740]: I0216 12:57:36.984979 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.252661 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 12:57:37 crc 
kubenswrapper[4740]: I0216 12:57:37.271364 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.552267 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.562627 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.617432 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.642499 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.643746 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.810006 4740 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 12:57:37 crc kubenswrapper[4740]: I0216 12:57:37.837943 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.074995 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.317687 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.348678 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 12:57:38 crc 
kubenswrapper[4740]: I0216 12:57:38.355051 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.416693 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.498302 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.545129 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.682254 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.798929 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.841410 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 12:57:38 crc kubenswrapper[4740]: I0216 12:57:38.931291 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.062349 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.276825 4740 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.279366 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=46.27934079 podStartE2EDuration="46.27934079s" podCreationTimestamp="2026-02-16 12:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:57:13.338619595 +0000 UTC m=+260.714968386" watchObservedRunningTime="2026-02-16 12:57:39.27934079 +0000 UTC m=+286.655689521"
Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.288187 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.288236 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.294286 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.316583 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.316563218 podStartE2EDuration="26.316563218s" podCreationTimestamp="2026-02-16 12:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:57:39.308188239 +0000 UTC m=+286.684536980" watchObservedRunningTime="2026-02-16 12:57:39.316563218 +0000 UTC m=+286.692911949"
Feb 16 12:57:39 crc kubenswrapper[4740]: I0216 12:57:39.953334 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.024285 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.276626 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.408228 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.476711 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.486586 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.751263 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 12:57:40 crc kubenswrapper[4740]: I0216 12:57:40.796494 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 16 12:57:41 crc kubenswrapper[4740]: I0216 12:57:41.334853 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 16 12:57:41 crc kubenswrapper[4740]: I0216 12:57:41.377499 4740 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 12:57:41 crc kubenswrapper[4740]: I0216 12:57:41.660288 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 12:57:41 crc kubenswrapper[4740]: I0216 12:57:41.782080 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 16 12:57:42 crc kubenswrapper[4740]: I0216 12:57:42.402878 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 12:57:42 crc kubenswrapper[4740]: I0216 12:57:42.676404 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 12:57:43 crc kubenswrapper[4740]: I0216 12:57:43.224121 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 12:57:47 crc kubenswrapper[4740]: I0216 12:57:47.326618 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 16 12:57:47 crc kubenswrapper[4740]: I0216 12:57:47.327526 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7" gracePeriod=5
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.052945 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-22crz"]
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.053478 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-22crz" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="registry-server" containerID="cri-o://3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9" gracePeriod=30
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.071930 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smtc5"]
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.072239 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-smtc5" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="registry-server" containerID="cri-o://a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483" gracePeriod=30
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.081649 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"]
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.081913 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" podUID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" containerName="marketplace-operator" containerID="cri-o://b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf" gracePeriod=30
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.088931 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"]
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.089286 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tbqv5" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="registry-server" containerID="cri-o://968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b" gracePeriod=30
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.107529 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"]
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.107805 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lrlzg" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="registry-server" containerID="cri-o://0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2" gracePeriod=30
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127102 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xsssg"]
Feb 16 12:57:48 crc kubenswrapper[4740]: E0216 12:57:48.127363 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64be474a-1d70-42d2-aa8b-977624363891" containerName="installer"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127382 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="64be474a-1d70-42d2-aa8b-977624363891" containerName="installer"
Feb 16 12:57:48 crc kubenswrapper[4740]: E0216 12:57:48.127392 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127399 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127482 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="64be474a-1d70-42d2-aa8b-977624363891" containerName="installer"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127495 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.127873 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.172299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.172379 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6b22\" (UniqueName: \"kubernetes.io/projected/db2dd193-ab4e-4011-988a-d516f2da367e-kube-api-access-g6b22\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.172406 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.180896 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xsssg"]
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.276443 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.276864 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6b22\" (UniqueName: \"kubernetes.io/projected/db2dd193-ab4e-4011-988a-d516f2da367e-kube-api-access-g6b22\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.276905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.278027 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.287999 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db2dd193-ab4e-4011-988a-d516f2da367e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.296969 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6b22\" (UniqueName: \"kubernetes.io/projected/db2dd193-ab4e-4011-988a-d516f2da367e-kube-api-access-g6b22\") pod \"marketplace-operator-79b997595-xsssg\" (UID: \"db2dd193-ab4e-4011-988a-d516f2da367e\") " pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.489274 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.493628 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22crz"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.524052 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.528916 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrlzg"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.531899 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.579997 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca\") pod \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580036 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content\") pod \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580091 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities\") pod \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580138 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities\") pod \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580174 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk44j\" (UniqueName: \"kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j\") pod \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580199 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content\") pod \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580218 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89vb4\" (UniqueName: \"kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4\") pod \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\" (UID: \"eb4cf07f-4486-4ff8-88d3-b04296a09ece\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580235 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics\") pod \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580250 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfrxb\" (UniqueName: \"kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb\") pod \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\" (UID: \"70e65531-7cfb-415d-a0a7-25288c2cd5c8\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580282 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities\") pod \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580311 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content\") pod \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\" (UID: \"14e85e39-c3bc-4944-8b13-a4e405ccafdc\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580335 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbrl8\" (UniqueName: \"kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8\") pod \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\" (UID: \"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.580963 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.581446 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities" (OuterVolumeSpecName: "utilities") pod "eb4cf07f-4486-4ff8-88d3-b04296a09ece" (UID: "eb4cf07f-4486-4ff8-88d3-b04296a09ece"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.581958 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities" (OuterVolumeSpecName: "utilities") pod "70e65531-7cfb-415d-a0a7-25288c2cd5c8" (UID: "70e65531-7cfb-415d-a0a7-25288c2cd5c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.582599 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities" (OuterVolumeSpecName: "utilities") pod "14e85e39-c3bc-4944-8b13-a4e405ccafdc" (UID: "14e85e39-c3bc-4944-8b13-a4e405ccafdc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.585715 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" (UID: "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.595594 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" (UID: "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.619093 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb" (OuterVolumeSpecName: "kube-api-access-lfrxb") pod "70e65531-7cfb-415d-a0a7-25288c2cd5c8" (UID: "70e65531-7cfb-415d-a0a7-25288c2cd5c8"). InnerVolumeSpecName "kube-api-access-lfrxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.619154 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4" (OuterVolumeSpecName: "kube-api-access-89vb4") pod "eb4cf07f-4486-4ff8-88d3-b04296a09ece" (UID: "eb4cf07f-4486-4ff8-88d3-b04296a09ece"). InnerVolumeSpecName "kube-api-access-89vb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.619198 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j" (OuterVolumeSpecName: "kube-api-access-jk44j") pod "14e85e39-c3bc-4944-8b13-a4e405ccafdc" (UID: "14e85e39-c3bc-4944-8b13-a4e405ccafdc"). InnerVolumeSpecName "kube-api-access-jk44j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.621237 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8" (OuterVolumeSpecName: "kube-api-access-wbrl8") pod "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" (UID: "f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b"). InnerVolumeSpecName "kube-api-access-wbrl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.631342 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70e65531-7cfb-415d-a0a7-25288c2cd5c8" (UID: "70e65531-7cfb-415d-a0a7-25288c2cd5c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.667484 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14e85e39-c3bc-4944-8b13-a4e405ccafdc" (UID: "14e85e39-c3bc-4944-8b13-a4e405ccafdc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.681440 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities\") pod \"4fd80862-652c-4fa2-a591-44a3cc76379d\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.681493 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content\") pod \"4fd80862-652c-4fa2-a591-44a3cc76379d\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.682867 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities" (OuterVolumeSpecName: "utilities") pod "4fd80862-652c-4fa2-a591-44a3cc76379d" (UID: "4fd80862-652c-4fa2-a591-44a3cc76379d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.688983 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9s2l\" (UniqueName: \"kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l\") pod \"4fd80862-652c-4fa2-a591-44a3cc76379d\" (UID: \"4fd80862-652c-4fa2-a591-44a3cc76379d\") "
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689448 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689465 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689476 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689490 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk44j\" (UniqueName: \"kubernetes.io/projected/14e85e39-c3bc-4944-8b13-a4e405ccafdc-kube-api-access-jk44j\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689500 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89vb4\" (UniqueName: \"kubernetes.io/projected/eb4cf07f-4486-4ff8-88d3-b04296a09ece-kube-api-access-89vb4\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689509 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689518 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfrxb\" (UniqueName: \"kubernetes.io/projected/70e65531-7cfb-415d-a0a7-25288c2cd5c8-kube-api-access-lfrxb\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689526 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689534 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e85e39-c3bc-4944-8b13-a4e405ccafdc-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689543 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbrl8\" (UniqueName: \"kubernetes.io/projected/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-kube-api-access-wbrl8\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689551 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.689559 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e65531-7cfb-415d-a0a7-25288c2cd5c8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.692070 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l" (OuterVolumeSpecName: "kube-api-access-l9s2l") pod "4fd80862-652c-4fa2-a591-44a3cc76379d" (UID: "4fd80862-652c-4fa2-a591-44a3cc76379d"). InnerVolumeSpecName "kube-api-access-l9s2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.706668 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fd80862-652c-4fa2-a591-44a3cc76379d" (UID: "4fd80862-652c-4fa2-a591-44a3cc76379d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.726437 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xsssg"]
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.775387 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb4cf07f-4486-4ff8-88d3-b04296a09ece" (UID: "eb4cf07f-4486-4ff8-88d3-b04296a09ece"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.790339 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4cf07f-4486-4ff8-88d3-b04296a09ece-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.790379 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9s2l\" (UniqueName: \"kubernetes.io/projected/4fd80862-652c-4fa2-a591-44a3cc76379d-kube-api-access-l9s2l\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:48 crc kubenswrapper[4740]: I0216 12:57:48.790394 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd80862-652c-4fa2-a591-44a3cc76379d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.024685 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" event={"ID":"db2dd193-ab4e-4011-988a-d516f2da367e","Type":"ContainerStarted","Data":"48c754b80487afff4b365f84d5b4b7eb63216492fe5e150f580c94fdc92e82b1"}
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.024732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" event={"ID":"db2dd193-ab4e-4011-988a-d516f2da367e","Type":"ContainerStarted","Data":"3e9d0deb96d52dc1803a334ccb068715b43914ab065a3f0f1f47258379f9e2a9"}
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.024759 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg"
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.025863 4740 generic.go:334] "Generic (PLEG): container finished" podID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" containerID="b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf" exitCode=0
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.025931 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" event={"ID":"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b","Type":"ContainerDied","Data":"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf"}
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.025949 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg"
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.025974 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d5vhg" event={"ID":"f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b","Type":"ContainerDied","Data":"ecc39d12cb6ac857f193b234c0c65095915f019fb5a183124161212d668749a6"}
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.025995 4740 scope.go:117] "RemoveContainer" containerID="b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf"
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.027490 4740 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xsssg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body=
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.027601 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" podUID="db2dd193-ab4e-4011-988a-d516f2da367e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused"
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.028080 4740 generic.go:334] "Generic (PLEG): container finished" podID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerID="968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b" exitCode=0
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.028104 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerDied","Data":"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b"}
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.028133 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbqv5"
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.028136 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbqv5" event={"ID":"4fd80862-652c-4fa2-a591-44a3cc76379d","Type":"ContainerDied","Data":"f4f0e4c138876a419d8305e7e6a9bc95934cbb191f330681343c30e1610937eb"}
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.032425 4740 generic.go:334] "Generic (PLEG): container finished" podID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerID="a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483" exitCode=0
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.032521 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerDied","Data":"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483"}
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.032560 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smtc5" event={"ID":"14e85e39-c3bc-4944-8b13-a4e405ccafdc","Type":"ContainerDied","Data":"30342fb7e4ac42c29ddfbf6e245edd8370e6082c882f3ddc8fc68fa25e67ec8b"}
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.032524 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smtc5"
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.034769 4740 generic.go:334] "Generic (PLEG): container finished" podID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerID="0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2" exitCode=0
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.034857 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerDied","Data":"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2"}
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.034885 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrlzg" event={"ID":"eb4cf07f-4486-4ff8-88d3-b04296a09ece","Type":"ContainerDied","Data":"bd639f85fe34dad1670a6a91b1a5a271314aa8af3eb87c2254c3dba3da066707"}
Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.034952 4740 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrlzg" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.041170 4740 generic.go:334] "Generic (PLEG): container finished" podID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerID="3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9" exitCode=0 Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.041211 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerDied","Data":"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.041242 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22crz" event={"ID":"70e65531-7cfb-415d-a0a7-25288c2cd5c8","Type":"ContainerDied","Data":"e2704b65ce01fba3c60e03244a825b4b8122c50b215c9372a0b6818fde2a82aa"} Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.041321 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-22crz" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.050291 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" podStartSLOduration=1.05027396 podStartE2EDuration="1.05027396s" podCreationTimestamp="2026-02-16 12:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:57:49.046643723 +0000 UTC m=+296.422992454" watchObservedRunningTime="2026-02-16 12:57:49.05027396 +0000 UTC m=+296.426622681" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.055144 4740 scope.go:117] "RemoveContainer" containerID="b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.055689 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf\": container with ID starting with b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf not found: ID does not exist" containerID="b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.055738 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf"} err="failed to get container status \"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf\": rpc error: code = NotFound desc = could not find container \"b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf\": container with ID starting with b8decf6f11cd4ae75b82680e5df1a37ee39fd6a2b9d1b07cae23ba3fd78173cf not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.055771 4740 scope.go:117] 
"RemoveContainer" containerID="968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.075209 4740 scope.go:117] "RemoveContainer" containerID="03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.086019 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.093204 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbqv5"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.096476 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.100891 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lrlzg"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.107133 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smtc5"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.110965 4740 scope.go:117] "RemoveContainer" containerID="fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.114275 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-smtc5"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.120147 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-22crz"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.129986 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-22crz"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.133487 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.134706 4740 scope.go:117] "RemoveContainer" containerID="968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.135447 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b\": container with ID starting with 968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b not found: ID does not exist" containerID="968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.135477 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b"} err="failed to get container status \"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b\": rpc error: code = NotFound desc = could not find container \"968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b\": container with ID starting with 968146eb59f3869ac1074b0da2ecd8fc8813049adf62da159e4f1ac3281ef72b not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.135497 4740 scope.go:117] "RemoveContainer" containerID="03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.135736 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc\": container with ID starting with 03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc not found: ID does not exist" containerID="03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 
12:57:49.135752 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc"} err="failed to get container status \"03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc\": rpc error: code = NotFound desc = could not find container \"03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc\": container with ID starting with 03b093da7a0fc85f086b3011977f669c91ece99972f32b603c54bfac90c57acc not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.135764 4740 scope.go:117] "RemoveContainer" containerID="fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.136023 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8\": container with ID starting with fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8 not found: ID does not exist" containerID="fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.136040 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8"} err="failed to get container status \"fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8\": rpc error: code = NotFound desc = could not find container \"fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8\": container with ID starting with fa2fdeba35c5d39f050bf650dc42a8e5140f9e9c3beaf46dd1b9c1afa74ab8a8 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.136052 4740 scope.go:117] "RemoveContainer" containerID="a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483" Feb 16 12:57:49 crc 
kubenswrapper[4740]: I0216 12:57:49.137957 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d5vhg"] Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.147975 4740 scope.go:117] "RemoveContainer" containerID="acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.164156 4740 scope.go:117] "RemoveContainer" containerID="c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.175435 4740 scope.go:117] "RemoveContainer" containerID="a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.175915 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483\": container with ID starting with a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483 not found: ID does not exist" containerID="a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.176033 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483"} err="failed to get container status \"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483\": rpc error: code = NotFound desc = could not find container \"a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483\": container with ID starting with a5e38e5bacedfecc4938132ae3be939d1878ff1c4881e7aecc90f5de21fab483 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.176127 4740 scope.go:117] "RemoveContainer" containerID="acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 
12:57:49.176439 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976\": container with ID starting with acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976 not found: ID does not exist" containerID="acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.176460 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976"} err="failed to get container status \"acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976\": rpc error: code = NotFound desc = could not find container \"acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976\": container with ID starting with acb2725291c23b1bbeb99dab1d90410e8277a1d4a3033a767eedd0e440bd2976 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.176474 4740 scope.go:117] "RemoveContainer" containerID="c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.176819 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106\": container with ID starting with c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106 not found: ID does not exist" containerID="c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.176909 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106"} err="failed to get container status \"c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106\": rpc 
error: code = NotFound desc = could not find container \"c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106\": container with ID starting with c6d2b90fd5a479af9f3ea961aaf1d945cb9dcf2ce5fc5f3ac313d0c4efbe6106 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.177063 4740 scope.go:117] "RemoveContainer" containerID="0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.193760 4740 scope.go:117] "RemoveContainer" containerID="6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.210384 4740 scope.go:117] "RemoveContainer" containerID="9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.285166 4740 scope.go:117] "RemoveContainer" containerID="0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.286583 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2\": container with ID starting with 0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2 not found: ID does not exist" containerID="0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.286624 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2"} err="failed to get container status \"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2\": rpc error: code = NotFound desc = could not find container \"0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2\": container with ID starting with 0a4f07da851e8ed659b887d60f438cdb43e0c761f083e0096f294c2e64c94eb2 not 
found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.286652 4740 scope.go:117] "RemoveContainer" containerID="6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.286892 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae\": container with ID starting with 6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae not found: ID does not exist" containerID="6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.286916 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae"} err="failed to get container status \"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae\": rpc error: code = NotFound desc = could not find container \"6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae\": container with ID starting with 6b66783ead4e15ae2bebbf45d7476f4d9e5dba5b43f7a4cecbb58565d34babae not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.286930 4740 scope.go:117] "RemoveContainer" containerID="9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.287160 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d\": container with ID starting with 9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d not found: ID does not exist" containerID="9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.287180 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d"} err="failed to get container status \"9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d\": rpc error: code = NotFound desc = could not find container \"9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d\": container with ID starting with 9826b3ed86c86b23dea3a42f3086d3bc1c05146393ec86ebd5075e372da6d38d not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.287211 4740 scope.go:117] "RemoveContainer" containerID="3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.288412 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" path="/var/lib/kubelet/pods/14e85e39-c3bc-4944-8b13-a4e405ccafdc/volumes" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.289330 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" path="/var/lib/kubelet/pods/4fd80862-652c-4fa2-a591-44a3cc76379d/volumes" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.289978 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" path="/var/lib/kubelet/pods/70e65531-7cfb-415d-a0a7-25288c2cd5c8/volumes" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.291197 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" path="/var/lib/kubelet/pods/eb4cf07f-4486-4ff8-88d3-b04296a09ece/volumes" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.293666 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" path="/var/lib/kubelet/pods/f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b/volumes" Feb 16 12:57:49 crc kubenswrapper[4740]: 
I0216 12:57:49.320240 4740 scope.go:117] "RemoveContainer" containerID="0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.341238 4740 scope.go:117] "RemoveContainer" containerID="681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.355749 4740 scope.go:117] "RemoveContainer" containerID="3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.356160 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9\": container with ID starting with 3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9 not found: ID does not exist" containerID="3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.356204 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9"} err="failed to get container status \"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9\": rpc error: code = NotFound desc = could not find container \"3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9\": container with ID starting with 3735fca8806f12a518236cf0d1946103bcab511630ed3dd1015c08e425a364e9 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.356225 4740 scope.go:117] "RemoveContainer" containerID="0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.356567 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7\": 
container with ID starting with 0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7 not found: ID does not exist" containerID="0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.356709 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7"} err="failed to get container status \"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7\": rpc error: code = NotFound desc = could not find container \"0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7\": container with ID starting with 0c74d20b9de817cde228532c5697be42ca2d8984a711bfcb9cd2869178622fc7 not found: ID does not exist" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.356920 4740 scope.go:117] "RemoveContainer" containerID="681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf" Feb 16 12:57:49 crc kubenswrapper[4740]: E0216 12:57:49.358484 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf\": container with ID starting with 681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf not found: ID does not exist" containerID="681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf" Feb 16 12:57:49 crc kubenswrapper[4740]: I0216 12:57:49.358530 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf"} err="failed to get container status \"681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf\": rpc error: code = NotFound desc = could not find container \"681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf\": container with ID starting with 
681aaf48807dacd68f90b1c74efa90b44e57d917e3b65b7ba93301a79d1633cf not found: ID does not exist" Feb 16 12:57:50 crc kubenswrapper[4740]: I0216 12:57:50.055531 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xsssg" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.908283 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.909651 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.941881 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942147 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942327 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942454 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942663 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942050 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942277 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942617 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.942712 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.943196 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.943286 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.943348 4740 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.943417 4740 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:52 crc kubenswrapper[4740]: I0216 12:57:52.951758 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.044254 4740 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.068144 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.068186 4740 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7" exitCode=137
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.068225 4740 scope.go:117] "RemoveContainer" containerID="d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7"
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.068320 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.080954 4740 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.089194 4740 scope.go:117] "RemoveContainer" containerID="d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7"
Feb 16 12:57:53 crc kubenswrapper[4740]: E0216 12:57:53.091221 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7\": container with ID starting with d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7 not found: ID does not exist" containerID="d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7"
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.091253 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7"} err="failed to get container status \"d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7\": rpc error: code = NotFound desc = could not find container \"d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7\": container with ID starting with d5f2049c4a9926bbf882a8fe1f3e03ed3fdd6faccaa2931ce0542d5bf6651fe7 not found: ID does not exist"
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.287770 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.288155 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.300724 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.300784 4740 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f1fcf95b-d1a1-43f9-a05e-2f6cdb428a46"
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.308594 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 16 12:57:53 crc kubenswrapper[4740]: I0216 12:57:53.308641 4740 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f1fcf95b-d1a1-43f9-a05e-2f6cdb428a46"
Feb 16 12:58:06 crc kubenswrapper[4740]: I0216 12:58:06.737649 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"]
Feb 16 12:58:06 crc kubenswrapper[4740]: I0216 12:58:06.738449 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager" containerID="cri-o://8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f" gracePeriod=30
Feb 16 12:58:06 crc kubenswrapper[4740]: I0216 12:58:06.855033 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"]
Feb 16 12:58:06 crc kubenswrapper[4740]: I0216 12:58:06.855538 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager" containerID="cri-o://ba6d6a7ff15c9121a27bcfb14f6c962701892713ba24644d2ad95665183ec8f9" gracePeriod=30
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.133094 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8"
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.135502 4740 generic.go:334] "Generic (PLEG): container finished" podID="798d1269-3882-45e8-898e-a625cf386089" containerID="8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f" exitCode=0
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.135558 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" event={"ID":"798d1269-3882-45e8-898e-a625cf386089","Type":"ContainerDied","Data":"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f"}
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.135582 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8" event={"ID":"798d1269-3882-45e8-898e-a625cf386089","Type":"ContainerDied","Data":"29e6c5dab661956c91b79a723fe07411f83f7e5c787f55a2531731add29989ac"}
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.135599 4740 scope.go:117] "RemoveContainer" containerID="8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f"
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.135653 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tdlx8"
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.137904 4740 generic.go:334] "Generic (PLEG): container finished" podID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerID="ba6d6a7ff15c9121a27bcfb14f6c962701892713ba24644d2ad95665183ec8f9" exitCode=0
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.137934 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" event={"ID":"3b3c2258-4f58-414c-a893-c721b5ac9c03","Type":"ContainerDied","Data":"ba6d6a7ff15c9121a27bcfb14f6c962701892713ba24644d2ad95665183ec8f9"}
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.166412 4740 scope.go:117] "RemoveContainer" containerID="8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f"
Feb 16 12:58:07 crc kubenswrapper[4740]: E0216 12:58:07.168802 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f\": container with ID starting with 8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f not found: ID does not exist" containerID="8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f"
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.168858 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f"} err="failed to get container status \"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f\": rpc error: code = NotFound desc = could not find container \"8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f\": container with ID starting with 8ed60f4672a1a42a6545944a9724ea7ecaae0d0b0a87cc0edd91afa608c94f7f not found: ID does not exist"
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.251434 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles\") pod \"798d1269-3882-45e8-898e-a625cf386089\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") "
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.251485 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkn8r\" (UniqueName: \"kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r\") pod \"798d1269-3882-45e8-898e-a625cf386089\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") "
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.251561 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca\") pod \"798d1269-3882-45e8-898e-a625cf386089\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") "
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.251580 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert\") pod \"798d1269-3882-45e8-898e-a625cf386089\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") "
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.251628 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config\") pod \"798d1269-3882-45e8-898e-a625cf386089\" (UID: \"798d1269-3882-45e8-898e-a625cf386089\") "
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.252654 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "798d1269-3882-45e8-898e-a625cf386089" (UID: "798d1269-3882-45e8-898e-a625cf386089"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.252670 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca" (OuterVolumeSpecName: "client-ca") pod "798d1269-3882-45e8-898e-a625cf386089" (UID: "798d1269-3882-45e8-898e-a625cf386089"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.253116 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config" (OuterVolumeSpecName: "config") pod "798d1269-3882-45e8-898e-a625cf386089" (UID: "798d1269-3882-45e8-898e-a625cf386089"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.257437 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "798d1269-3882-45e8-898e-a625cf386089" (UID: "798d1269-3882-45e8-898e-a625cf386089"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.257481 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r" (OuterVolumeSpecName: "kube-api-access-bkn8r") pod "798d1269-3882-45e8-898e-a625cf386089" (UID: "798d1269-3882-45e8-898e-a625cf386089"). InnerVolumeSpecName "kube-api-access-bkn8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.264142 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353019 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config\") pod \"3b3c2258-4f58-414c-a893-c721b5ac9c03\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") "
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353071 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert\") pod \"3b3c2258-4f58-414c-a893-c721b5ac9c03\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") "
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353092 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca\") pod \"3b3c2258-4f58-414c-a893-c721b5ac9c03\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") "
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353161 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6tnj\" (UniqueName: \"kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj\") pod \"3b3c2258-4f58-414c-a893-c721b5ac9c03\" (UID: \"3b3c2258-4f58-414c-a893-c721b5ac9c03\") "
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353401 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-config\") on node \"crc\" DevicePath \"\""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353420 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353431 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkn8r\" (UniqueName: \"kubernetes.io/projected/798d1269-3882-45e8-898e-a625cf386089-kube-api-access-bkn8r\") on node \"crc\" DevicePath \"\""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353440 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798d1269-3882-45e8-898e-a625cf386089-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353448 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798d1269-3882-45e8-898e-a625cf386089-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353758 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config" (OuterVolumeSpecName: "config") pod "3b3c2258-4f58-414c-a893-c721b5ac9c03" (UID: "3b3c2258-4f58-414c-a893-c721b5ac9c03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.353798 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca" (OuterVolumeSpecName: "client-ca") pod "3b3c2258-4f58-414c-a893-c721b5ac9c03" (UID: "3b3c2258-4f58-414c-a893-c721b5ac9c03"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.356190 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3b3c2258-4f58-414c-a893-c721b5ac9c03" (UID: "3b3c2258-4f58-414c-a893-c721b5ac9c03"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.356260 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj" (OuterVolumeSpecName: "kube-api-access-h6tnj") pod "3b3c2258-4f58-414c-a893-c721b5ac9c03" (UID: "3b3c2258-4f58-414c-a893-c721b5ac9c03"). InnerVolumeSpecName "kube-api-access-h6tnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.453381 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"]
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.454468 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-config\") on node \"crc\" DevicePath \"\""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.454525 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b3c2258-4f58-414c-a893-c721b5ac9c03-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.454542 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b3c2258-4f58-414c-a893-c721b5ac9c03-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.454557 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6tnj\" (UniqueName: \"kubernetes.io/projected/3b3c2258-4f58-414c-a893-c721b5ac9c03-kube-api-access-h6tnj\") on node \"crc\" DevicePath \"\""
Feb 16 12:58:07 crc kubenswrapper[4740]: I0216 12:58:07.458376 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tdlx8"]
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.143964 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd" event={"ID":"3b3c2258-4f58-414c-a893-c721b5ac9c03","Type":"ContainerDied","Data":"fa1231c777ac869082e12b271c9de8f207a251381328df828b3f2a937306e447"}
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.144013 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.144015 4740 scope.go:117] "RemoveContainer" containerID="ba6d6a7ff15c9121a27bcfb14f6c962701892713ba24644d2ad95665183ec8f9"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.176525 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"]
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.179347 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wrjdd"]
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.371742 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"]
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372022 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372040 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372057 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="extract-content"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372066 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="extract-content"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372077 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372086 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372099 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="extract-utilities"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372109 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="extract-utilities"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372120 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="extract-utilities"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372129 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="extract-utilities"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372141 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" containerName="marketplace-operator"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372151 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" containerName="marketplace-operator"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372163 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="extract-utilities"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372172 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="extract-utilities"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372180 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372190 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372202 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372210 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372219 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="extract-content"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372227 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="extract-content"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372239 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372247 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372260 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="extract-content"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372267 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="extract-content"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372278 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="extract-utilities"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372286 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="extract-utilities"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372297 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="extract-content"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372304 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="extract-content"
Feb 16 12:58:08 crc kubenswrapper[4740]: E0216 12:58:08.372315 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372323 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372429 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="798d1269-3882-45e8-898e-a625cf386089" containerName="controller-manager"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372443 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" containerName="route-controller-manager"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372460 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb4cf07f-4486-4ff8-88d3-b04296a09ece" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372469 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e65531-7cfb-415d-a0a7-25288c2cd5c8" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372479 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1e836dd-0850-48d2-b0b2-1dcd6ed3fd4b" containerName="marketplace-operator"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372489 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd80862-652c-4fa2-a591-44a3cc76379d" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372500 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e85e39-c3bc-4944-8b13-a4e405ccafdc" containerName="registry-server"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.372995 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375102 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375181 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375331 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375383 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375404 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.375508 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.376044 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"]
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.376657 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.384713 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.384734 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.385015 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.385030 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.385235 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.385235 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.395262 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"]
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.395656 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.400209 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"]
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.465976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzh8p\" (UniqueName: \"kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466035 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466062 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466077 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466237 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466305 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466384 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.466455 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4644q\" (UniqueName: \"kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.567849 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4644q\" (UniqueName: \"kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.567908 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzh8p\" (UniqueName: \"kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.567943 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.567965 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.567983 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568032 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568065 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568096 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"
Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568958 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName:
\"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.568993 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.569081 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.569291 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.570857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.572448 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.573043 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.589897 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzh8p\" (UniqueName: \"kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p\") pod \"route-controller-manager-567b8dc5b4-8thgk\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.592472 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4644q\" (UniqueName: \"kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q\") pod \"controller-manager-797cb9f85d-67nzc\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.687517 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:08 crc kubenswrapper[4740]: I0216 12:58:08.694182 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.072844 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"] Feb 16 12:58:09 crc kubenswrapper[4740]: W0216 12:58:09.076844 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aaac701_1db9_48c2_9f15_61080e1c6389.slice/crio-0935455f8a4446608878957a74eec09c507482f8f39c1e65a198b86d705e95ae WatchSource:0}: Error finding container 0935455f8a4446608878957a74eec09c507482f8f39c1e65a198b86d705e95ae: Status 404 returned error can't find the container with id 0935455f8a4446608878957a74eec09c507482f8f39c1e65a198b86d705e95ae Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.107889 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"] Feb 16 12:58:09 crc kubenswrapper[4740]: W0216 12:58:09.112442 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb211397a_75e6_4a63_9e58_5320e07554e9.slice/crio-647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc WatchSource:0}: Error finding container 647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc: Status 404 returned error can't find the container with id 647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.151519 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" event={"ID":"b211397a-75e6-4a63-9e58-5320e07554e9","Type":"ContainerStarted","Data":"647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc"} Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.153185 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" event={"ID":"5aaac701-1db9-48c2-9f15-61080e1c6389","Type":"ContainerStarted","Data":"0935455f8a4446608878957a74eec09c507482f8f39c1e65a198b86d705e95ae"} Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.299684 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3c2258-4f58-414c-a893-c721b5ac9c03" path="/var/lib/kubelet/pods/3b3c2258-4f58-414c-a893-c721b5ac9c03/volumes" Feb 16 12:58:09 crc kubenswrapper[4740]: I0216 12:58:09.302032 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798d1269-3882-45e8-898e-a625cf386089" path="/var/lib/kubelet/pods/798d1269-3882-45e8-898e-a625cf386089/volumes" Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.159431 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" event={"ID":"5aaac701-1db9-48c2-9f15-61080e1c6389","Type":"ContainerStarted","Data":"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832"} Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.159854 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.160736 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" event={"ID":"b211397a-75e6-4a63-9e58-5320e07554e9","Type":"ContainerStarted","Data":"0f596f60cfc20df2fe6f9a50eaee6cbff74e73a1ac2673a361d75b584fd10bfd"} Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.161022 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.169197 4740 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.169418 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:58:10 crc kubenswrapper[4740]: I0216 12:58:10.176712 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" podStartSLOduration=2.176695034 podStartE2EDuration="2.176695034s" podCreationTimestamp="2026-02-16 12:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:58:10.174341568 +0000 UTC m=+317.550690289" watchObservedRunningTime="2026-02-16 12:58:10.176695034 +0000 UTC m=+317.553043755" Feb 16 12:58:15 crc kubenswrapper[4740]: I0216 12:58:15.575908 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:58:15 crc kubenswrapper[4740]: I0216 12:58:15.576659 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:58:26 crc kubenswrapper[4740]: I0216 12:58:26.737236 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" podStartSLOduration=18.73721805 podStartE2EDuration="18.73721805s" podCreationTimestamp="2026-02-16 12:58:08 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:58:10.20761917 +0000 UTC m=+317.583967901" watchObservedRunningTime="2026-02-16 12:58:26.73721805 +0000 UTC m=+334.113566771" Feb 16 12:58:26 crc kubenswrapper[4740]: I0216 12:58:26.741284 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"] Feb 16 12:58:26 crc kubenswrapper[4740]: I0216 12:58:26.741995 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" podUID="b211397a-75e6-4a63-9e58-5320e07554e9" containerName="controller-manager" containerID="cri-o://0f596f60cfc20df2fe6f9a50eaee6cbff74e73a1ac2673a361d75b584fd10bfd" gracePeriod=30 Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.258626 4740 generic.go:334] "Generic (PLEG): container finished" podID="b211397a-75e6-4a63-9e58-5320e07554e9" containerID="0f596f60cfc20df2fe6f9a50eaee6cbff74e73a1ac2673a361d75b584fd10bfd" exitCode=0 Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.258748 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" event={"ID":"b211397a-75e6-4a63-9e58-5320e07554e9","Type":"ContainerDied","Data":"0f596f60cfc20df2fe6f9a50eaee6cbff74e73a1ac2673a361d75b584fd10bfd"} Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.259055 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" event={"ID":"b211397a-75e6-4a63-9e58-5320e07554e9","Type":"ContainerDied","Data":"647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc"} Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.259079 4740 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="647e92f8d964deb29c6057989601af7f3f4c8c7057159ece1dd57e7deec0affc" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.274668 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.401386 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca\") pod \"b211397a-75e6-4a63-9e58-5320e07554e9\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.401435 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles\") pod \"b211397a-75e6-4a63-9e58-5320e07554e9\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.401454 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert\") pod \"b211397a-75e6-4a63-9e58-5320e07554e9\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.401612 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4644q\" (UniqueName: \"kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q\") pod \"b211397a-75e6-4a63-9e58-5320e07554e9\" (UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.401649 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config\") pod \"b211397a-75e6-4a63-9e58-5320e07554e9\" 
(UID: \"b211397a-75e6-4a63-9e58-5320e07554e9\") " Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.402517 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b211397a-75e6-4a63-9e58-5320e07554e9" (UID: "b211397a-75e6-4a63-9e58-5320e07554e9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.402918 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config" (OuterVolumeSpecName: "config") pod "b211397a-75e6-4a63-9e58-5320e07554e9" (UID: "b211397a-75e6-4a63-9e58-5320e07554e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.402925 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "b211397a-75e6-4a63-9e58-5320e07554e9" (UID: "b211397a-75e6-4a63-9e58-5320e07554e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.408277 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q" (OuterVolumeSpecName: "kube-api-access-4644q") pod "b211397a-75e6-4a63-9e58-5320e07554e9" (UID: "b211397a-75e6-4a63-9e58-5320e07554e9"). InnerVolumeSpecName "kube-api-access-4644q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.408334 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b211397a-75e6-4a63-9e58-5320e07554e9" (UID: "b211397a-75e6-4a63-9e58-5320e07554e9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.502987 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4644q\" (UniqueName: \"kubernetes.io/projected/b211397a-75e6-4a63-9e58-5320e07554e9-kube-api-access-4644q\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.503023 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.503036 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.503048 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b211397a-75e6-4a63-9e58-5320e07554e9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:27 crc kubenswrapper[4740]: I0216 12:58:27.503058 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b211397a-75e6-4a63-9e58-5320e07554e9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.263143 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-797cb9f85d-67nzc" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.290244 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"] Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.293139 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-797cb9f85d-67nzc"] Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.569754 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb"] Feb 16 12:58:28 crc kubenswrapper[4740]: E0216 12:58:28.570037 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b211397a-75e6-4a63-9e58-5320e07554e9" containerName="controller-manager" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.570054 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b211397a-75e6-4a63-9e58-5320e07554e9" containerName="controller-manager" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.570163 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b211397a-75e6-4a63-9e58-5320e07554e9" containerName="controller-manager" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.570597 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.573896 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.574271 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb"] Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.574294 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.574375 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.574638 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.574778 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.575661 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.586372 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.700931 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g286v\" (UniqueName: \"kubernetes.io/projected/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-kube-api-access-g286v\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " 
pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.701018 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-client-ca\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.701039 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-proxy-ca-bundles\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.701059 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-config\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.701249 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-serving-cert\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.802203 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-client-ca\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.802266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-proxy-ca-bundles\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.802298 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-config\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.802352 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-serving-cert\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.802451 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g286v\" (UniqueName: \"kubernetes.io/projected/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-kube-api-access-g286v\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.803257 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-client-ca\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.803935 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-proxy-ca-bundles\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.804213 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-config\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.806652 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-serving-cert\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.818084 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g286v\" (UniqueName: \"kubernetes.io/projected/f2681eeb-b24e-4bc4-ada3-f35b91e302bc-kube-api-access-g286v\") pod \"controller-manager-5b9866ffcd-z8ptb\" (UID: \"f2681eeb-b24e-4bc4-ada3-f35b91e302bc\") " pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 
12:58:28 crc kubenswrapper[4740]: I0216 12:58:28.894627 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.095146 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb"] Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.268573 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" event={"ID":"f2681eeb-b24e-4bc4-ada3-f35b91e302bc","Type":"ContainerStarted","Data":"67f4b0884927a7ac6f28c18b0952c2a395c85ea5887aac6c26097e0b5ac10786"} Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.268644 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" event={"ID":"f2681eeb-b24e-4bc4-ada3-f35b91e302bc","Type":"ContainerStarted","Data":"5aa610d805d137258a354c7b423e8461ce9c59e766c64e9de3a901e230c5683e"} Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.268848 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.270456 4740 patch_prober.go:28] interesting pod/controller-manager-5b9866ffcd-z8ptb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.270497 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" podUID="f2681eeb-b24e-4bc4-ada3-f35b91e302bc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 
10.217.0.61:8443: connect: connection refused" Feb 16 12:58:29 crc kubenswrapper[4740]: I0216 12:58:29.293611 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b211397a-75e6-4a63-9e58-5320e07554e9" path="/var/lib/kubelet/pods/b211397a-75e6-4a63-9e58-5320e07554e9/volumes" Feb 16 12:58:30 crc kubenswrapper[4740]: I0216 12:58:30.278139 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" Feb 16 12:58:30 crc kubenswrapper[4740]: I0216 12:58:30.295234 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b9866ffcd-z8ptb" podStartSLOduration=4.295213278 podStartE2EDuration="4.295213278s" podCreationTimestamp="2026-02-16 12:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:58:29.289196448 +0000 UTC m=+336.665545169" watchObservedRunningTime="2026-02-16 12:58:30.295213278 +0000 UTC m=+337.671562009" Feb 16 12:58:45 crc kubenswrapper[4740]: I0216 12:58:45.575021 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:58:45 crc kubenswrapper[4740]: I0216 12:58:45.575802 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.441744 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-drs7f"] Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.443138 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.467433 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-drs7f"] Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602707 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602774 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-certificates\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602806 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7abcf159-ee53-4f68-8e0d-aa863b58e081-installation-pull-secrets\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602842 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-trusted-ca\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602869 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7abcf159-ee53-4f68-8e0d-aa863b58e081-ca-trust-extracted\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.602888 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2jd\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-kube-api-access-lr2jd\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.603026 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-tls\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.603097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-bound-sa-token\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc 
kubenswrapper[4740]: I0216 12:58:58.621795 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704399 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7abcf159-ee53-4f68-8e0d-aa863b58e081-installation-pull-secrets\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704463 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-trusted-ca\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704504 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7abcf159-ee53-4f68-8e0d-aa863b58e081-ca-trust-extracted\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704530 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr2jd\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-kube-api-access-lr2jd\") pod \"image-registry-66df7c8f76-drs7f\" (UID: 
\"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704562 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-tls\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704581 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-bound-sa-token\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.704636 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-certificates\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.705634 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7abcf159-ee53-4f68-8e0d-aa863b58e081-ca-trust-extracted\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.706550 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-certificates\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.706584 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7abcf159-ee53-4f68-8e0d-aa863b58e081-trusted-ca\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.710681 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-registry-tls\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.712418 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7abcf159-ee53-4f68-8e0d-aa863b58e081-installation-pull-secrets\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.722534 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-bound-sa-token\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.724399 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lr2jd\" (UniqueName: \"kubernetes.io/projected/7abcf159-ee53-4f68-8e0d-aa863b58e081-kube-api-access-lr2jd\") pod \"image-registry-66df7c8f76-drs7f\" (UID: \"7abcf159-ee53-4f68-8e0d-aa863b58e081\") " pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:58 crc kubenswrapper[4740]: I0216 12:58:58.760294 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:59 crc kubenswrapper[4740]: I0216 12:58:59.153366 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-drs7f"] Feb 16 12:58:59 crc kubenswrapper[4740]: I0216 12:58:59.449086 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" event={"ID":"7abcf159-ee53-4f68-8e0d-aa863b58e081","Type":"ContainerStarted","Data":"ab8b0055f252dfd52267f967f66d04886a020a9111617d4cdc70ff990edab77e"} Feb 16 12:58:59 crc kubenswrapper[4740]: I0216 12:58:59.449466 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:58:59 crc kubenswrapper[4740]: I0216 12:58:59.449484 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" event={"ID":"7abcf159-ee53-4f68-8e0d-aa863b58e081","Type":"ContainerStarted","Data":"d31fa87a46dbae15230a3bc84edd2309ec58c1cfbc407eff5eaa76b90ca87715"} Feb 16 12:58:59 crc kubenswrapper[4740]: I0216 12:58:59.465349 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" podStartSLOduration=1.465332846 podStartE2EDuration="1.465332846s" podCreationTimestamp="2026-02-16 12:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:58:59.464366326 
+0000 UTC m=+366.840715057" watchObservedRunningTime="2026-02-16 12:58:59.465332846 +0000 UTC m=+366.841681567" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.067441 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7j9d2"] Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.073881 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.081311 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.082562 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j9d2"] Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.236794 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-catalog-content\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.236854 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9bl\" (UniqueName: \"kubernetes.io/projected/b4d0e942-91bf-460d-9465-2633c1436b2c-kube-api-access-kq9bl\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.236891 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-utilities\") pod \"redhat-operators-7j9d2\" (UID: 
\"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.254633 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xbn89"] Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.256618 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.258709 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.260951 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbn89"] Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.338076 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-utilities\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.338170 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-catalog-content\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.338194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq9bl\" (UniqueName: \"kubernetes.io/projected/b4d0e942-91bf-460d-9465-2633c1436b2c-kube-api-access-kq9bl\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 
16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.338912 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-utilities\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.339182 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d0e942-91bf-460d-9465-2633c1436b2c-catalog-content\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.357949 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq9bl\" (UniqueName: \"kubernetes.io/projected/b4d0e942-91bf-460d-9465-2633c1436b2c-kube-api-access-kq9bl\") pod \"redhat-operators-7j9d2\" (UID: \"b4d0e942-91bf-460d-9465-2633c1436b2c\") " pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.401570 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.439082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz65x\" (UniqueName: \"kubernetes.io/projected/60d9eb5f-5eed-4968-beae-0001d2d70d2a-kube-api-access-lz65x\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.439207 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-utilities\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.439365 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-catalog-content\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.540672 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-catalog-content\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.541055 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz65x\" (UniqueName: \"kubernetes.io/projected/60d9eb5f-5eed-4968-beae-0001d2d70d2a-kube-api-access-lz65x\") pod 
\"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.541104 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-utilities\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.541285 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-catalog-content\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.541549 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d9eb5f-5eed-4968-beae-0001d2d70d2a-utilities\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.558229 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz65x\" (UniqueName: \"kubernetes.io/projected/60d9eb5f-5eed-4968-beae-0001d2d70d2a-kube-api-access-lz65x\") pod \"certified-operators-xbn89\" (UID: \"60d9eb5f-5eed-4968-beae-0001d2d70d2a\") " pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.574028 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.794645 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j9d2"] Feb 16 12:59:01 crc kubenswrapper[4740]: W0216 12:59:01.804413 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4d0e942_91bf_460d_9465_2633c1436b2c.slice/crio-2f0eeec1414fa8ccf3e3858f2875ac4ad83585738051dc216a4a8dafa072f29e WatchSource:0}: Error finding container 2f0eeec1414fa8ccf3e3858f2875ac4ad83585738051dc216a4a8dafa072f29e: Status 404 returned error can't find the container with id 2f0eeec1414fa8ccf3e3858f2875ac4ad83585738051dc216a4a8dafa072f29e Feb 16 12:59:01 crc kubenswrapper[4740]: I0216 12:59:01.962229 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbn89"] Feb 16 12:59:02 crc kubenswrapper[4740]: W0216 12:59:02.027499 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60d9eb5f_5eed_4968_beae_0001d2d70d2a.slice/crio-014fe10003094e74c9d4903b18567037fd42e9619404f4290e10c57f99aa52f9 WatchSource:0}: Error finding container 014fe10003094e74c9d4903b18567037fd42e9619404f4290e10c57f99aa52f9: Status 404 returned error can't find the container with id 014fe10003094e74c9d4903b18567037fd42e9619404f4290e10c57f99aa52f9 Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.464157 4740 generic.go:334] "Generic (PLEG): container finished" podID="b4d0e942-91bf-460d-9465-2633c1436b2c" containerID="4bda33e6f14cffbc36a62c5a996a64f5735a7b2297d687b038ca79c807709233" exitCode=0 Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.464267 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j9d2" 
event={"ID":"b4d0e942-91bf-460d-9465-2633c1436b2c","Type":"ContainerDied","Data":"4bda33e6f14cffbc36a62c5a996a64f5735a7b2297d687b038ca79c807709233"} Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.464569 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j9d2" event={"ID":"b4d0e942-91bf-460d-9465-2633c1436b2c","Type":"ContainerStarted","Data":"2f0eeec1414fa8ccf3e3858f2875ac4ad83585738051dc216a4a8dafa072f29e"} Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.468503 4740 generic.go:334] "Generic (PLEG): container finished" podID="60d9eb5f-5eed-4968-beae-0001d2d70d2a" containerID="edeaa38751ff8dea32d9594156dbb99c077dd0940322d603eef22f1a86631be1" exitCode=0 Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.468530 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbn89" event={"ID":"60d9eb5f-5eed-4968-beae-0001d2d70d2a","Type":"ContainerDied","Data":"edeaa38751ff8dea32d9594156dbb99c077dd0940322d603eef22f1a86631be1"} Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.468572 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbn89" event={"ID":"60d9eb5f-5eed-4968-beae-0001d2d70d2a","Type":"ContainerStarted","Data":"014fe10003094e74c9d4903b18567037fd42e9619404f4290e10c57f99aa52f9"} Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.651521 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-czkjl"] Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.655140 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.657783 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.658027 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-czkjl"] Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.755291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.755414 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.755544 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbgkp\" (UniqueName: \"kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.856190 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbgkp\" (UniqueName: \"kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp\") pod \"community-operators-czkjl\" 
(UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.856266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.856294 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.856676 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.856713 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.875975 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbgkp\" (UniqueName: \"kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp\") pod \"community-operators-czkjl\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " 
pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:02 crc kubenswrapper[4740]: I0216 12:59:02.985317 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:03 crc kubenswrapper[4740]: I0216 12:59:03.430716 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-czkjl"] Feb 16 12:59:03 crc kubenswrapper[4740]: I0216 12:59:03.477315 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j9d2" event={"ID":"b4d0e942-91bf-460d-9465-2633c1436b2c","Type":"ContainerStarted","Data":"55630a7d95c7931350189da26c08d5118fd245a36186a327ebc0a11d26638921"} Feb 16 12:59:03 crc kubenswrapper[4740]: W0216 12:59:03.484106 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ca213d9_ef6f_4240_aa95_fe7f4e2691cf.slice/crio-98fe05c00f99e38008108c07c4337266311b937b88c3c95f0dd66f754946345d WatchSource:0}: Error finding container 98fe05c00f99e38008108c07c4337266311b937b88c3c95f0dd66f754946345d: Status 404 returned error can't find the container with id 98fe05c00f99e38008108c07c4337266311b937b88c3c95f0dd66f754946345d Feb 16 12:59:03 crc kubenswrapper[4740]: I0216 12:59:03.489742 4740 generic.go:334] "Generic (PLEG): container finished" podID="60d9eb5f-5eed-4968-beae-0001d2d70d2a" containerID="6674ac7a76297138bb11087f1caca58071ad3df92ab4c9b530c89adfec966b55" exitCode=0 Feb 16 12:59:03 crc kubenswrapper[4740]: I0216 12:59:03.489784 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbn89" event={"ID":"60d9eb5f-5eed-4968-beae-0001d2d70d2a","Type":"ContainerDied","Data":"6674ac7a76297138bb11087f1caca58071ad3df92ab4c9b530c89adfec966b55"} Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.447738 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-lv7b8"] Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.448928 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.450397 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.458685 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lv7b8"] Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.495510 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerID="a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6" exitCode=0 Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.495602 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerDied","Data":"a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6"} Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.496469 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerStarted","Data":"98fe05c00f99e38008108c07c4337266311b937b88c3c95f0dd66f754946345d"} Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.498521 4740 generic.go:334] "Generic (PLEG): container finished" podID="b4d0e942-91bf-460d-9465-2633c1436b2c" containerID="55630a7d95c7931350189da26c08d5118fd245a36186a327ebc0a11d26638921" exitCode=0 Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.498636 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j9d2" 
event={"ID":"b4d0e942-91bf-460d-9465-2633c1436b2c","Type":"ContainerDied","Data":"55630a7d95c7931350189da26c08d5118fd245a36186a327ebc0a11d26638921"} Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.501199 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbn89" event={"ID":"60d9eb5f-5eed-4968-beae-0001d2d70d2a","Type":"ContainerStarted","Data":"2b076548f55d0299dbecf70c82abd75bf9d182507be041a169846555b8f983cc"} Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.553871 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xbn89" podStartSLOduration=1.848472873 podStartE2EDuration="3.553847316s" podCreationTimestamp="2026-02-16 12:59:01 +0000 UTC" firstStartedPulling="2026-02-16 12:59:02.470561821 +0000 UTC m=+369.846910552" lastFinishedPulling="2026-02-16 12:59:04.175936274 +0000 UTC m=+371.552284995" observedRunningTime="2026-02-16 12:59:04.549959353 +0000 UTC m=+371.926308084" watchObservedRunningTime="2026-02-16 12:59:04.553847316 +0000 UTC m=+371.930196037" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.578948 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-utilities\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.579035 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85wnp\" (UniqueName: \"kubernetes.io/projected/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-kube-api-access-85wnp\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.579096 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-catalog-content\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.680465 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-utilities\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.680520 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85wnp\" (UniqueName: \"kubernetes.io/projected/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-kube-api-access-85wnp\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.680579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-catalog-content\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.681488 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-catalog-content\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.681582 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-utilities\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.697209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85wnp\" (UniqueName: \"kubernetes.io/projected/6ecdfb1a-6379-4a42-a4c7-da582898b1f3-kube-api-access-85wnp\") pod \"redhat-marketplace-lv7b8\" (UID: \"6ecdfb1a-6379-4a42-a4c7-da582898b1f3\") " pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:04 crc kubenswrapper[4740]: I0216 12:59:04.766627 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.179169 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lv7b8"] Feb 16 12:59:05 crc kubenswrapper[4740]: W0216 12:59:05.190080 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ecdfb1a_6379_4a42_a4c7_da582898b1f3.slice/crio-ef07fddb16adf6ff0fcde79fd00003f37cc58ea87b7e487520a88cc0b4374185 WatchSource:0}: Error finding container ef07fddb16adf6ff0fcde79fd00003f37cc58ea87b7e487520a88cc0b4374185: Status 404 returned error can't find the container with id ef07fddb16adf6ff0fcde79fd00003f37cc58ea87b7e487520a88cc0b4374185 Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.506067 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ecdfb1a-6379-4a42-a4c7-da582898b1f3" containerID="b9681fc00c3849d4127c0f84c4c14c83ba8027d6a6d308437753c02d5a07fc74" exitCode=0 Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.506116 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-lv7b8" event={"ID":"6ecdfb1a-6379-4a42-a4c7-da582898b1f3","Type":"ContainerDied","Data":"b9681fc00c3849d4127c0f84c4c14c83ba8027d6a6d308437753c02d5a07fc74"} Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.506148 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lv7b8" event={"ID":"6ecdfb1a-6379-4a42-a4c7-da582898b1f3","Type":"ContainerStarted","Data":"ef07fddb16adf6ff0fcde79fd00003f37cc58ea87b7e487520a88cc0b4374185"} Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.509147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerStarted","Data":"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767"} Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.530162 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j9d2" event={"ID":"b4d0e942-91bf-460d-9465-2633c1436b2c","Type":"ContainerStarted","Data":"6b690e9acb7f917cedd3cbd4afc8d030e9fa6e80bf476388d76e9eaa038374e8"} Feb 16 12:59:05 crc kubenswrapper[4740]: I0216 12:59:05.569302 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7j9d2" podStartSLOduration=2.104386698 podStartE2EDuration="4.569280662s" podCreationTimestamp="2026-02-16 12:59:01 +0000 UTC" firstStartedPulling="2026-02-16 12:59:02.467500174 +0000 UTC m=+369.843848905" lastFinishedPulling="2026-02-16 12:59:04.932394148 +0000 UTC m=+372.308742869" observedRunningTime="2026-02-16 12:59:05.569263232 +0000 UTC m=+372.945611963" watchObservedRunningTime="2026-02-16 12:59:05.569280662 +0000 UTC m=+372.945629383" Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.536596 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ecdfb1a-6379-4a42-a4c7-da582898b1f3" 
containerID="4ca9c7da352a1b4d8535a2b52f97e4e08077c4d693b6aff48cf3321b712cc493" exitCode=0 Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.536699 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lv7b8" event={"ID":"6ecdfb1a-6379-4a42-a4c7-da582898b1f3","Type":"ContainerDied","Data":"4ca9c7da352a1b4d8535a2b52f97e4e08077c4d693b6aff48cf3321b712cc493"} Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.540007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerDied","Data":"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767"} Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.540903 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerID="591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767" exitCode=0 Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.752392 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"] Feb 16 12:59:06 crc kubenswrapper[4740]: I0216 12:59:06.752591 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" podUID="5aaac701-1db9-48c2-9f15-61080e1c6389" containerName="route-controller-manager" containerID="cri-o://a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832" gracePeriod=30 Feb 16 12:59:06 crc kubenswrapper[4740]: E0216 12:59:06.820425 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aaac701_1db9_48c2_9f15_61080e1c6389.slice/crio-a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832.scope\": RecentStats: unable to find data in 
memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aaac701_1db9_48c2_9f15_61080e1c6389.slice/crio-conmon-a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832.scope\": RecentStats: unable to find data in memory cache]" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.139263 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.319126 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert\") pod \"5aaac701-1db9-48c2-9f15-61080e1c6389\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.319193 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzh8p\" (UniqueName: \"kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p\") pod \"5aaac701-1db9-48c2-9f15-61080e1c6389\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.319219 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca\") pod \"5aaac701-1db9-48c2-9f15-61080e1c6389\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.319306 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config\") pod \"5aaac701-1db9-48c2-9f15-61080e1c6389\" (UID: \"5aaac701-1db9-48c2-9f15-61080e1c6389\") " Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.320094 4740 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca" (OuterVolumeSpecName: "client-ca") pod "5aaac701-1db9-48c2-9f15-61080e1c6389" (UID: "5aaac701-1db9-48c2-9f15-61080e1c6389"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.320411 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config" (OuterVolumeSpecName: "config") pod "5aaac701-1db9-48c2-9f15-61080e1c6389" (UID: "5aaac701-1db9-48c2-9f15-61080e1c6389"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.330070 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5aaac701-1db9-48c2-9f15-61080e1c6389" (UID: "5aaac701-1db9-48c2-9f15-61080e1c6389"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.330111 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p" (OuterVolumeSpecName: "kube-api-access-mzh8p") pod "5aaac701-1db9-48c2-9f15-61080e1c6389" (UID: "5aaac701-1db9-48c2-9f15-61080e1c6389"). InnerVolumeSpecName "kube-api-access-mzh8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.420501 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-config\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.420546 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aaac701-1db9-48c2-9f15-61080e1c6389-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.420563 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzh8p\" (UniqueName: \"kubernetes.io/projected/5aaac701-1db9-48c2-9f15-61080e1c6389-kube-api-access-mzh8p\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.420580 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5aaac701-1db9-48c2-9f15-61080e1c6389-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.550451 4740 generic.go:334] "Generic (PLEG): container finished" podID="5aaac701-1db9-48c2-9f15-61080e1c6389" containerID="a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832" exitCode=0 Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.550504 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" event={"ID":"5aaac701-1db9-48c2-9f15-61080e1c6389","Type":"ContainerDied","Data":"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832"} Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.550530 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.550551 4740 scope.go:117] "RemoveContainer" containerID="a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.550540 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk" event={"ID":"5aaac701-1db9-48c2-9f15-61080e1c6389","Type":"ContainerDied","Data":"0935455f8a4446608878957a74eec09c507482f8f39c1e65a198b86d705e95ae"} Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.587679 4740 scope.go:117] "RemoveContainer" containerID="a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832" Feb 16 12:59:07 crc kubenswrapper[4740]: E0216 12:59:07.590769 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832\": container with ID starting with a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832 not found: ID does not exist" containerID="a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.592632 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832"} err="failed to get container status \"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832\": rpc error: code = NotFound desc = could not find container \"a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832\": container with ID starting with a8f42fb1c9505c2137bc9fa976ab54272113df1a20f1dc22292dc5691d0df832 not found: ID does not exist" Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.597411 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"] Feb 16 12:59:07 crc kubenswrapper[4740]: I0216 12:59:07.601610 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567b8dc5b4-8thgk"] Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.152623 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs"] Feb 16 12:59:08 crc kubenswrapper[4740]: E0216 12:59:08.153153 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aaac701-1db9-48c2-9f15-61080e1c6389" containerName="route-controller-manager" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.153168 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aaac701-1db9-48c2-9f15-61080e1c6389" containerName="route-controller-manager" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.153289 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aaac701-1db9-48c2-9f15-61080e1c6389" containerName="route-controller-manager" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.153735 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.156900 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.157093 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.157252 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.157552 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.157781 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.162469 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs"] Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.165415 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.332857 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-config\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.332919 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkzc2\" (UniqueName: \"kubernetes.io/projected/df55ec5c-7923-476c-aaaf-722391d7d31d-kube-api-access-pkzc2\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.333003 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-client-ca\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.333024 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df55ec5c-7923-476c-aaaf-722391d7d31d-serving-cert\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.434448 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-config\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.435984 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkzc2\" (UniqueName: \"kubernetes.io/projected/df55ec5c-7923-476c-aaaf-722391d7d31d-kube-api-access-pkzc2\") pod 
\"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.436080 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-client-ca\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.436101 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df55ec5c-7923-476c-aaaf-722391d7d31d-serving-cert\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.435920 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-config\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.437427 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df55ec5c-7923-476c-aaaf-722391d7d31d-client-ca\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.439870 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df55ec5c-7923-476c-aaaf-722391d7d31d-serving-cert\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.461547 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkzc2\" (UniqueName: \"kubernetes.io/projected/df55ec5c-7923-476c-aaaf-722391d7d31d-kube-api-access-pkzc2\") pod \"route-controller-manager-77fc789b55-tkshs\" (UID: \"df55ec5c-7923-476c-aaaf-722391d7d31d\") " pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.472042 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.568800 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerStarted","Data":"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b"} Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.571343 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lv7b8" event={"ID":"6ecdfb1a-6379-4a42-a4c7-da582898b1f3","Type":"ContainerStarted","Data":"6ea241b473697cd30c3c1ea849055a0cb8c3fb831551d7949b72f509bb45df94"} Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.587233 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-czkjl" podStartSLOduration=3.486601856 podStartE2EDuration="6.587217147s" podCreationTimestamp="2026-02-16 12:59:02 +0000 UTC" firstStartedPulling="2026-02-16 12:59:04.497720367 +0000 UTC 
m=+371.874069088" lastFinishedPulling="2026-02-16 12:59:07.598335658 +0000 UTC m=+374.974684379" observedRunningTime="2026-02-16 12:59:08.585269066 +0000 UTC m=+375.961617787" watchObservedRunningTime="2026-02-16 12:59:08.587217147 +0000 UTC m=+375.963565868" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.604992 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lv7b8" podStartSLOduration=2.459126729 podStartE2EDuration="4.604974607s" podCreationTimestamp="2026-02-16 12:59:04 +0000 UTC" firstStartedPulling="2026-02-16 12:59:05.50736132 +0000 UTC m=+372.883710041" lastFinishedPulling="2026-02-16 12:59:07.653209198 +0000 UTC m=+375.029557919" observedRunningTime="2026-02-16 12:59:08.603940844 +0000 UTC m=+375.980289575" watchObservedRunningTime="2026-02-16 12:59:08.604974607 +0000 UTC m=+375.981323328" Feb 16 12:59:08 crc kubenswrapper[4740]: I0216 12:59:08.888184 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs"] Feb 16 12:59:08 crc kubenswrapper[4740]: W0216 12:59:08.897023 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf55ec5c_7923_476c_aaaf_722391d7d31d.slice/crio-267bffcc76cce3a1ef40c4121c634769c9bc70e1c39ae6f6d6e422be4a2529ef WatchSource:0}: Error finding container 267bffcc76cce3a1ef40c4121c634769c9bc70e1c39ae6f6d6e422be4a2529ef: Status 404 returned error can't find the container with id 267bffcc76cce3a1ef40c4121c634769c9bc70e1c39ae6f6d6e422be4a2529ef Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.287826 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aaac701-1db9-48c2-9f15-61080e1c6389" path="/var/lib/kubelet/pods/5aaac701-1db9-48c2-9f15-61080e1c6389/volumes" Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.578062 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" event={"ID":"df55ec5c-7923-476c-aaaf-722391d7d31d","Type":"ContainerStarted","Data":"8d93c3460ca29ef4cb976e1256043952d5580230008ff267f9c8add36b3f0eaa"} Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.578121 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" event={"ID":"df55ec5c-7923-476c-aaaf-722391d7d31d","Type":"ContainerStarted","Data":"267bffcc76cce3a1ef40c4121c634769c9bc70e1c39ae6f6d6e422be4a2529ef"} Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.578575 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.583607 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" Feb 16 12:59:09 crc kubenswrapper[4740]: I0216 12:59:09.598958 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77fc789b55-tkshs" podStartSLOduration=3.598936317 podStartE2EDuration="3.598936317s" podCreationTimestamp="2026-02-16 12:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:59:09.594760115 +0000 UTC m=+376.971108856" watchObservedRunningTime="2026-02-16 12:59:09.598936317 +0000 UTC m=+376.975285048" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.402310 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.402940 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.448456 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.575481 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.575559 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.681524 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.697152 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7j9d2" Feb 16 12:59:11 crc kubenswrapper[4740]: I0216 12:59:11.742617 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xbn89" Feb 16 12:59:13 crc kubenswrapper[4740]: I0216 12:59:12.985738 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:13 crc kubenswrapper[4740]: I0216 12:59:12.986059 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:13 crc kubenswrapper[4740]: I0216 12:59:13.061371 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:13 crc kubenswrapper[4740]: I0216 12:59:13.674569 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-czkjl" Feb 16 12:59:14 crc kubenswrapper[4740]: I0216 
12:59:14.766946 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:14 crc kubenswrapper[4740]: I0216 12:59:14.766998 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:14 crc kubenswrapper[4740]: I0216 12:59:14.810396 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.575743 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.576210 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.576285 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.577457 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.577581 4740 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4" gracePeriod=600 Feb 16 12:59:15 crc kubenswrapper[4740]: I0216 12:59:15.687942 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lv7b8" Feb 16 12:59:16 crc kubenswrapper[4740]: I0216 12:59:16.630575 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4" exitCode=0 Feb 16 12:59:16 crc kubenswrapper[4740]: I0216 12:59:16.630715 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4"} Feb 16 12:59:16 crc kubenswrapper[4740]: I0216 12:59:16.631513 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7"} Feb 16 12:59:16 crc kubenswrapper[4740]: I0216 12:59:16.631546 4740 scope.go:117] "RemoveContainer" containerID="2ea3bdd4bdd317eadd586a2142e2a43de92a9c0216ce52f6f8dc0bf5356d090b" Feb 16 12:59:18 crc kubenswrapper[4740]: I0216 12:59:18.770603 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-drs7f" Feb 16 12:59:18 crc kubenswrapper[4740]: I0216 12:59:18.841530 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"] Feb 16 12:59:43 crc kubenswrapper[4740]: I0216 12:59:43.897755 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" podUID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" containerName="registry" containerID="cri-o://bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da" gracePeriod=30 Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.275876 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366386 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr6wd\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366560 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366597 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366618 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366639 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366678 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366715 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.366753 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates\") pod \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\" (UID: \"56fbd3c7-a514-479c-9b0f-1cdb3025cae6\") " Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.367754 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.368199 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.373310 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.373588 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.373976 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd" (OuterVolumeSpecName: "kube-api-access-vr6wd") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "kube-api-access-vr6wd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.375451 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.375783 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.384934 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "56fbd3c7-a514-479c-9b0f-1cdb3025cae6" (UID: "56fbd3c7-a514-479c-9b0f-1cdb3025cae6"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468009 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468050 4740 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468065 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468078 4740 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468089 4740 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468101 4740 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.468113 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr6wd\" (UniqueName: \"kubernetes.io/projected/56fbd3c7-a514-479c-9b0f-1cdb3025cae6-kube-api-access-vr6wd\") on node \"crc\" DevicePath \"\"" Feb 16 12:59:44 crc 
kubenswrapper[4740]: I0216 12:59:44.806919 4740 generic.go:334] "Generic (PLEG): container finished" podID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" containerID="bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da" exitCode=0 Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.807027 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.807124 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" event={"ID":"56fbd3c7-a514-479c-9b0f-1cdb3025cae6","Type":"ContainerDied","Data":"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da"} Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.807728 4740 scope.go:117] "RemoveContainer" containerID="bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.807669 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lkjkp" event={"ID":"56fbd3c7-a514-479c-9b0f-1cdb3025cae6","Type":"ContainerDied","Data":"1104556d5cde5c0aa4a407502225880f615d1c9eedcf19e3ada6ce6e63d3b266"} Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.830967 4740 scope.go:117] "RemoveContainer" containerID="bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da" Feb 16 12:59:44 crc kubenswrapper[4740]: E0216 12:59:44.833790 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da\": container with ID starting with bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da not found: ID does not exist" containerID="bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.833983 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da"} err="failed to get container status \"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da\": rpc error: code = NotFound desc = could not find container \"bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da\": container with ID starting with bcb4bacdd207a23ac145e1b39a5673c9d9445825e42216f7e4c73a7e3fbfd6da not found: ID does not exist" Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.850173 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"] Feb 16 12:59:44 crc kubenswrapper[4740]: I0216 12:59:44.856511 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lkjkp"] Feb 16 12:59:45 crc kubenswrapper[4740]: I0216 12:59:45.286130 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" path="/var/lib/kubelet/pods/56fbd3c7-a514-479c-9b0f-1cdb3025cae6/volumes" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.183225 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r"] Feb 16 13:00:00 crc kubenswrapper[4740]: E0216 13:00:00.183943 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" containerName="registry" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.183957 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" containerName="registry" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.184059 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="56fbd3c7-a514-479c-9b0f-1cdb3025cae6" containerName="registry" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.184428 4740 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.186543 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.187004 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.197996 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r"] Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.271922 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.272025 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.272049 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jfmg\" (UniqueName: \"kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.373151 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.373194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jfmg\" (UniqueName: \"kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.373241 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.374102 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.380122 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.391898 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jfmg\" (UniqueName: \"kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg\") pod \"collect-profiles-29520780-wmq9r\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.501632 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:00 crc kubenswrapper[4740]: I0216 13:00:00.936042 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r"] Feb 16 13:00:01 crc kubenswrapper[4740]: I0216 13:00:01.906264 4740 generic.go:334] "Generic (PLEG): container finished" podID="1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" containerID="db29968995b45d1f7cc2cd53a227253b37be7ae972a329e7a6e867128e553405" exitCode=0 Feb 16 13:00:01 crc kubenswrapper[4740]: I0216 13:00:01.906432 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" event={"ID":"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb","Type":"ContainerDied","Data":"db29968995b45d1f7cc2cd53a227253b37be7ae972a329e7a6e867128e553405"} Feb 16 13:00:01 crc kubenswrapper[4740]: I0216 13:00:01.906630 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" 
event={"ID":"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb","Type":"ContainerStarted","Data":"290b315b65da51d40853ecb68d4c0084e8b605bd187376ad3d3532e80625ed4e"} Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.150634 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.218089 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume\") pod \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.218210 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jfmg\" (UniqueName: \"kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg\") pod \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.218235 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume\") pod \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\" (UID: \"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb\") " Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.219159 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" (UID: "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.224449 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" (UID: "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.225438 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg" (OuterVolumeSpecName: "kube-api-access-5jfmg") pod "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" (UID: "1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb"). InnerVolumeSpecName "kube-api-access-5jfmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.319843 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.319896 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jfmg\" (UniqueName: \"kubernetes.io/projected/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-kube-api-access-5jfmg\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.319914 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.919046 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" 
event={"ID":"1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb","Type":"ContainerDied","Data":"290b315b65da51d40853ecb68d4c0084e8b605bd187376ad3d3532e80625ed4e"} Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.919086 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="290b315b65da51d40853ecb68d4c0084e8b605bd187376ad3d3532e80625ed4e" Feb 16 13:00:03 crc kubenswrapper[4740]: I0216 13:00:03.919093 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r" Feb 16 13:01:15 crc kubenswrapper[4740]: I0216 13:01:15.575173 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:01:15 crc kubenswrapper[4740]: I0216 13:01:15.576028 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:01:45 crc kubenswrapper[4740]: I0216 13:01:45.574930 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:01:45 crc kubenswrapper[4740]: I0216 13:01:45.575489 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:02:15 crc kubenswrapper[4740]: I0216 13:02:15.575428 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:02:15 crc kubenswrapper[4740]: I0216 13:02:15.576037 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:02:15 crc kubenswrapper[4740]: I0216 13:02:15.576088 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:02:15 crc kubenswrapper[4740]: I0216 13:02:15.576766 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:02:15 crc kubenswrapper[4740]: I0216 13:02:15.576853 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7" gracePeriod=600 Feb 16 13:02:16 crc kubenswrapper[4740]: I0216 13:02:16.668236 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7" exitCode=0 Feb 16 13:02:16 crc kubenswrapper[4740]: I0216 13:02:16.668284 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7"} Feb 16 13:02:16 crc kubenswrapper[4740]: I0216 13:02:16.669028 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c"} Feb 16 13:02:16 crc kubenswrapper[4740]: I0216 13:02:16.669059 4740 scope.go:117] "RemoveContainer" containerID="b8d9d0bef39567853abfb9201ee335adfc510739be057c066c7ccc29a8a58ea4" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.629262 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh"] Feb 16 13:03:09 crc kubenswrapper[4740]: E0216 13:03:09.630103 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" containerName="collect-profiles" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.630121 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" containerName="collect-profiles" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.630247 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" containerName="collect-profiles" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.630710 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.632620 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.632719 4740 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2k78b" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.633841 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.636564 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-kflg5"] Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.637395 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kflg5" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.638887 4740 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-bgvh8" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.653342 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh"] Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.670383 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25fnr"] Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.671170 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.674189 4740 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-c82xf" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.676045 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25fnr"] Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.683149 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kflg5"] Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.785084 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lc82\" (UniqueName: \"kubernetes.io/projected/beeada69-65c5-434a-af02-8e6b23e13138-kube-api-access-5lc82\") pod \"cert-manager-cainjector-cf98fcc89-hpjbh\" (UID: \"beeada69-65c5-434a-af02-8e6b23e13138\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.785170 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxb6l\" (UniqueName: \"kubernetes.io/projected/8b35e0e1-44f6-4481-a71e-98e3f8462bb7-kube-api-access-cxb6l\") pod \"cert-manager-858654f9db-kflg5\" (UID: \"8b35e0e1-44f6-4481-a71e-98e3f8462bb7\") " pod="cert-manager/cert-manager-858654f9db-kflg5" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.785191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46864\" (UniqueName: \"kubernetes.io/projected/a68020b3-17ff-43dc-b17d-0845940c0758-kube-api-access-46864\") pod \"cert-manager-webhook-687f57d79b-25fnr\" (UID: \"a68020b3-17ff-43dc-b17d-0845940c0758\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.886625 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxb6l\" (UniqueName: \"kubernetes.io/projected/8b35e0e1-44f6-4481-a71e-98e3f8462bb7-kube-api-access-cxb6l\") pod \"cert-manager-858654f9db-kflg5\" (UID: \"8b35e0e1-44f6-4481-a71e-98e3f8462bb7\") " pod="cert-manager/cert-manager-858654f9db-kflg5" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.886665 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46864\" (UniqueName: \"kubernetes.io/projected/a68020b3-17ff-43dc-b17d-0845940c0758-kube-api-access-46864\") pod \"cert-manager-webhook-687f57d79b-25fnr\" (UID: \"a68020b3-17ff-43dc-b17d-0845940c0758\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.886720 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lc82\" (UniqueName: \"kubernetes.io/projected/beeada69-65c5-434a-af02-8e6b23e13138-kube-api-access-5lc82\") pod \"cert-manager-cainjector-cf98fcc89-hpjbh\" (UID: \"beeada69-65c5-434a-af02-8e6b23e13138\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.906672 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lc82\" (UniqueName: \"kubernetes.io/projected/beeada69-65c5-434a-af02-8e6b23e13138-kube-api-access-5lc82\") pod \"cert-manager-cainjector-cf98fcc89-hpjbh\" (UID: \"beeada69-65c5-434a-af02-8e6b23e13138\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.908391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxb6l\" (UniqueName: \"kubernetes.io/projected/8b35e0e1-44f6-4481-a71e-98e3f8462bb7-kube-api-access-cxb6l\") pod \"cert-manager-858654f9db-kflg5\" (UID: \"8b35e0e1-44f6-4481-a71e-98e3f8462bb7\") " 
pod="cert-manager/cert-manager-858654f9db-kflg5" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.911083 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46864\" (UniqueName: \"kubernetes.io/projected/a68020b3-17ff-43dc-b17d-0845940c0758-kube-api-access-46864\") pod \"cert-manager-webhook-687f57d79b-25fnr\" (UID: \"a68020b3-17ff-43dc-b17d-0845940c0758\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.962312 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.970006 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kflg5" Feb 16 13:03:09 crc kubenswrapper[4740]: I0216 13:03:09.990493 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.373783 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kflg5"] Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.383650 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.417626 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25fnr"] Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.420302 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh"] Feb 16 13:03:10 crc kubenswrapper[4740]: W0216 13:03:10.422656 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeeada69_65c5_434a_af02_8e6b23e13138.slice/crio-ee3de3ab33c657a0cccf21a60e2e83a31db365bb4c85c5a6aac5314fbf4b89de WatchSource:0}: Error finding container ee3de3ab33c657a0cccf21a60e2e83a31db365bb4c85c5a6aac5314fbf4b89de: Status 404 returned error can't find the container with id ee3de3ab33c657a0cccf21a60e2e83a31db365bb4c85c5a6aac5314fbf4b89de Feb 16 13:03:10 crc kubenswrapper[4740]: W0216 13:03:10.427003 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda68020b3_17ff_43dc_b17d_0845940c0758.slice/crio-3fa6f37bfb7f56461f094fe9ae1c518a5f293c06febd6c4eed68866d35462cf3 WatchSource:0}: Error finding container 3fa6f37bfb7f56461f094fe9ae1c518a5f293c06febd6c4eed68866d35462cf3: Status 404 returned error can't find the container with id 3fa6f37bfb7f56461f094fe9ae1c518a5f293c06febd6c4eed68866d35462cf3 Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.997805 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" event={"ID":"beeada69-65c5-434a-af02-8e6b23e13138","Type":"ContainerStarted","Data":"ee3de3ab33c657a0cccf21a60e2e83a31db365bb4c85c5a6aac5314fbf4b89de"} Feb 16 13:03:10 crc kubenswrapper[4740]: I0216 13:03:10.999077 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kflg5" event={"ID":"8b35e0e1-44f6-4481-a71e-98e3f8462bb7","Type":"ContainerStarted","Data":"f88db5297ec631ac828a43f4dcb16e8b31b6299d219ee16e91a4ed0eac3a62cd"} Feb 16 13:03:11 crc kubenswrapper[4740]: I0216 13:03:11.000108 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" event={"ID":"a68020b3-17ff-43dc-b17d-0845940c0758","Type":"ContainerStarted","Data":"3fa6f37bfb7f56461f094fe9ae1c518a5f293c06febd6c4eed68866d35462cf3"} Feb 16 13:03:14 crc kubenswrapper[4740]: I0216 13:03:14.017559 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kflg5" event={"ID":"8b35e0e1-44f6-4481-a71e-98e3f8462bb7","Type":"ContainerStarted","Data":"557e212c7284a8e868eb014f3db352425d2b82aaf15788cbfe05682a7e9cf678"} Feb 16 13:03:14 crc kubenswrapper[4740]: I0216 13:03:14.019386 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" event={"ID":"a68020b3-17ff-43dc-b17d-0845940c0758","Type":"ContainerStarted","Data":"b61fe2893ec93647eb8b9b0b95599eab9997ea713d93302e8ee7d81b46ddceb2"} Feb 16 13:03:14 crc kubenswrapper[4740]: I0216 13:03:14.019526 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:14 crc kubenswrapper[4740]: I0216 13:03:14.034017 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-kflg5" podStartSLOduration=2.213561926 podStartE2EDuration="5.033997481s" podCreationTimestamp="2026-02-16 13:03:09 +0000 UTC" firstStartedPulling="2026-02-16 13:03:10.383247705 +0000 UTC m=+617.759596436" lastFinishedPulling="2026-02-16 13:03:13.20368327 +0000 UTC m=+620.580031991" observedRunningTime="2026-02-16 13:03:14.03214425 +0000 UTC m=+621.408492971" watchObservedRunningTime="2026-02-16 13:03:14.033997481 +0000 UTC m=+621.410346212" Feb 16 13:03:14 crc kubenswrapper[4740]: I0216 13:03:14.057991 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" podStartSLOduration=2.274933787 podStartE2EDuration="5.057949742s" podCreationTimestamp="2026-02-16 13:03:09 +0000 UTC" firstStartedPulling="2026-02-16 13:03:10.429257632 +0000 UTC m=+617.805606343" lastFinishedPulling="2026-02-16 13:03:13.212273577 +0000 UTC m=+620.588622298" observedRunningTime="2026-02-16 13:03:14.053222043 +0000 UTC m=+621.429570764" watchObservedRunningTime="2026-02-16 13:03:14.057949742 
+0000 UTC m=+621.434298473" Feb 16 13:03:15 crc kubenswrapper[4740]: I0216 13:03:15.030194 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" event={"ID":"beeada69-65c5-434a-af02-8e6b23e13138","Type":"ContainerStarted","Data":"7d5abc7a16c492df2a5eaa32a803f4d79c5729f029f7a4c6618715209a325da3"} Feb 16 13:03:15 crc kubenswrapper[4740]: I0216 13:03:15.045482 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hpjbh" podStartSLOduration=2.512772826 podStartE2EDuration="6.045464191s" podCreationTimestamp="2026-02-16 13:03:09 +0000 UTC" firstStartedPulling="2026-02-16 13:03:10.425088502 +0000 UTC m=+617.801437223" lastFinishedPulling="2026-02-16 13:03:13.957779867 +0000 UTC m=+621.334128588" observedRunningTime="2026-02-16 13:03:15.043282139 +0000 UTC m=+622.419630860" watchObservedRunningTime="2026-02-16 13:03:15.045464191 +0000 UTC m=+622.421812912" Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.615266 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-msmgh"] Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620019 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-node" containerID="cri-o://9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4" gracePeriod=30 Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620199 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-controller" containerID="cri-o://db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992" gracePeriod=30 Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620400 4740 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="sbdb" containerID="cri-o://85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a" gracePeriod=30 Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620456 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="nbdb" containerID="cri-o://845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356" gracePeriod=30 Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620524 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="northd" containerID="cri-o://459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0" gracePeriod=30 Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620415 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-acl-logging" containerID="cri-o://f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3" gracePeriod=30 Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.620936 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf" gracePeriod=30 Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.663411 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" 
containerID="cri-o://0a6a80333534e58b90885aab282874ad3931ca7d2826046ac167da650d7e688d" gracePeriod=30 Feb 16 13:03:19 crc kubenswrapper[4740]: E0216 13:03:19.949609 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4734b9dd_f672_4895_86b3_538d9012af9f.slice/crio-85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4734b9dd_f672_4895_86b3_538d9012af9f.slice/crio-459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4734b9dd_f672_4895_86b3_538d9012af9f.slice/crio-845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356.scope\": RecentStats: unable to find data in memory cache]" Feb 16 13:03:19 crc kubenswrapper[4740]: I0216 13:03:19.995790 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-25fnr" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.060305 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/2.log" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.061120 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/1.log" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.061169 4740 generic.go:334] "Generic (PLEG): container finished" podID="21f981d4-46dd-4bb5-b244-aaf603008c5e" containerID="20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c" exitCode=2 Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.061248 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerDied","Data":"20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c"} Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.061303 4740 scope.go:117] "RemoveContainer" containerID="f9a98ba5b3da9a538c17af7f1a71a9a80bebadb8fb4d28676250f4be257f7abb" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.062021 4740 scope.go:117] "RemoveContainer" containerID="20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.062242 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-v88dn_openshift-multus(21f981d4-46dd-4bb5-b244-aaf603008c5e)\"" pod="openshift-multus/multus-v88dn" podUID="21f981d4-46dd-4bb5-b244-aaf603008c5e" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.068190 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovnkube-controller/3.log" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.071368 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-acl-logging/0.log" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.071984 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-controller/0.log" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.072944 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="0a6a80333534e58b90885aab282874ad3931ca7d2826046ac167da650d7e688d" exitCode=0 Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.072983 4740 generic.go:334] "Generic 
(PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a" exitCode=0 Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.072998 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356" exitCode=0 Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073010 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0" exitCode=0 Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073021 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf" exitCode=0 Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073034 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4" exitCode=0 Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073044 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3" exitCode=143 Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073054 4740 generic.go:334] "Generic (PLEG): container finished" podID="4734b9dd-f672-4895-86b3-538d9012af9f" containerID="db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992" exitCode=143 Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073049 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" 
event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"0a6a80333534e58b90885aab282874ad3931ca7d2826046ac167da650d7e688d"} Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073116 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a"} Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073128 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356"} Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073137 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0"} Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf"} Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073160 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4"} Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073170 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" 
event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3"} Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.073180 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992"} Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.117952 4740 scope.go:117] "RemoveContainer" containerID="169bb4a33c6f7e3272b4a644c4bed55c18ee1d75224c0e9e4c9272276932539c" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.314547 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-acl-logging/0.log" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.315122 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-controller/0.log" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.315536 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379386 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rglx7"] Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.379847 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379878 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.379897 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="nbdb" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379909 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="nbdb" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.379923 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379934 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.379946 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379957 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.379971 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.379984 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380002 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="northd" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380025 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="northd" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380039 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-acl-logging" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380049 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-acl-logging" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380062 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="sbdb" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380072 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="sbdb" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380086 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-node" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380095 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-node" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380112 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380123 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380136 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kubecfg-setup" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380147 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kubecfg-setup" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380169 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380180 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380345 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380359 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380374 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-acl-logging" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380393 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="kube-rbac-proxy-node" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380408 4740 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380423 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="northd" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380433 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="nbdb" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380444 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="sbdb" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380459 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovn-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380471 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380481 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: E0216 13:03:20.380596 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380607 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.380731 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" containerName="ovnkube-controller" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.384661 4740 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430333 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430355 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430387 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430418 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rml5w\" (UniqueName: \"kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430461 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc 
kubenswrapper[4740]: I0216 13:03:20.430477 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430502 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430535 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430568 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430606 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430645 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430687 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430711 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430731 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430751 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430804 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 
13:03:20.430882 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430905 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430952 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430977 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431000 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4734b9dd-f672-4895-86b3-538d9012af9f\" (UID: \"4734b9dd-f672-4895-86b3-538d9012af9f\") " Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.430459 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd" 
(OuterVolumeSpecName: "host-cni-netd") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431068 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431013 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket" (OuterVolumeSpecName: "log-socket") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431020 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431040 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log" (OuterVolumeSpecName: "node-log") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431054 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431130 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431182 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431208 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash" (OuterVolumeSpecName: "host-slash") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431276 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431310 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431427 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431524 4740 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431542 4740 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431545 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431554 4740 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431567 4740 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431581 4740 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431596 4740 reconciler_common.go:293] "Volume detached for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431581 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431608 4740 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431621 4740 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431727 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431744 4740 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431887 4740 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431927 4740 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431944 4740 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.431957 4740 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.432066 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.436184 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w" (OuterVolumeSpecName: "kube-api-access-rml5w") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "kube-api-access-rml5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.436592 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.444318 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4734b9dd-f672-4895-86b3-538d9012af9f" (UID: "4734b9dd-f672-4895-86b3-538d9012af9f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533483 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-netd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533574 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-kubelet\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s85w9\" (UniqueName: \"kubernetes.io/projected/821af362-e357-43e9-86e5-259cef9b4a63-kube-api-access-s85w9\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533646 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-ovn\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-node-log\") pod \"ovnkube-node-rglx7\" (UID: 
\"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533694 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533713 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-config\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533735 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-systemd-units\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533759 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-systemd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533779 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-openvswitch\") pod 
\"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533802 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-script-lib\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533847 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-log-socket\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533869 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533901 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-var-lib-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533940 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-env-overrides\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533966 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-netns\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.533991 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/821af362-e357-43e9-86e5-259cef9b4a63-ovn-node-metrics-cert\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534091 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-slash\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534171 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-bin\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534257 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-etc-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534357 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534370 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rml5w\" (UniqueName: \"kubernetes.io/projected/4734b9dd-f672-4895-86b3-538d9012af9f-kube-api-access-rml5w\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534382 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4734b9dd-f672-4895-86b3-538d9012af9f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534391 4740 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534401 4740 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534411 4740 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4734b9dd-f672-4895-86b3-538d9012af9f-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.534423 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/4734b9dd-f672-4895-86b3-538d9012af9f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635764 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-env-overrides\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635853 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-netns\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635878 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/821af362-e357-43e9-86e5-259cef9b4a63-ovn-node-metrics-cert\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-slash\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635925 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-bin\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635953 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-etc-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.635984 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-netd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636007 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-kubelet\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636037 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s85w9\" (UniqueName: \"kubernetes.io/projected/821af362-e357-43e9-86e5-259cef9b4a63-kube-api-access-s85w9\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636058 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-ovn\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc 
kubenswrapper[4740]: I0216 13:03:20.636077 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-node-log\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636074 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-slash\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636104 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-netns\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636103 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636136 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-run-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636168 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-ovn\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636179 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-config\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636187 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-node-log\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636219 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-bin\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-etc-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636226 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-cni-netd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-systemd-units\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636330 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-systemd-units\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636333 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-systemd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636357 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-systemd\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636381 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-openvswitch\") pod \"ovnkube-node-rglx7\" 
(UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636422 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-script-lib\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-log-socket\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636455 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-run-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636471 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636495 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-log-socket\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636515 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-var-lib-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636533 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.636651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-var-lib-openvswitch\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.637347 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-env-overrides\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.637440 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-config\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.637504 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/821af362-e357-43e9-86e5-259cef9b4a63-ovnkube-script-lib\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.637548 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/821af362-e357-43e9-86e5-259cef9b4a63-host-kubelet\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.639170 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/821af362-e357-43e9-86e5-259cef9b4a63-ovn-node-metrics-cert\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.655142 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s85w9\" (UniqueName: \"kubernetes.io/projected/821af362-e357-43e9-86e5-259cef9b4a63-kube-api-access-s85w9\") pod \"ovnkube-node-rglx7\" (UID: \"821af362-e357-43e9-86e5-259cef9b4a63\") " pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:20 crc kubenswrapper[4740]: I0216 13:03:20.698707 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.083574 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/2.log" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.086493 4740 generic.go:334] "Generic (PLEG): container finished" podID="821af362-e357-43e9-86e5-259cef9b4a63" containerID="6640a5f734cd430cdfd59b39aab5a672ae67e177c3184774f33cb2d3d1771d6a" exitCode=0 Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.086581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerDied","Data":"6640a5f734cd430cdfd59b39aab5a672ae67e177c3184774f33cb2d3d1771d6a"} Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.086613 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"6dbf36e35751a26aa9ae9b347ef9788d0cd2eff5d34cf538a43699aafb28e9cd"} Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.092473 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-acl-logging/0.log" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.093082 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-msmgh_4734b9dd-f672-4895-86b3-538d9012af9f/ovn-controller/0.log" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.093471 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" event={"ID":"4734b9dd-f672-4895-86b3-538d9012af9f","Type":"ContainerDied","Data":"eb66f3d2b37f21fe7aa111a136026ce7eb2cec2307821fe7b198f1e6beb272ce"} Feb 16 13:03:21 crc kubenswrapper[4740]: 
I0216 13:03:21.093510 4740 scope.go:117] "RemoveContainer" containerID="0a6a80333534e58b90885aab282874ad3931ca7d2826046ac167da650d7e688d" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.093627 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-msmgh" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.122070 4740 scope.go:117] "RemoveContainer" containerID="85ce324537508fb227a540f66a176804e6167355bd256a61e08da2d189689b3a" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.151000 4740 scope.go:117] "RemoveContainer" containerID="845bffdbf6f120353e2a6b49a28e3b3030370c496c6b39e6536ba8039b1c1356" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.152313 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-msmgh"] Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.155795 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-msmgh"] Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.170220 4740 scope.go:117] "RemoveContainer" containerID="459075bd1e031d8028fabb1d0bda6d14e6d424a390dbc5bb5899f503665812f0" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.193929 4740 scope.go:117] "RemoveContainer" containerID="1bc8638b863d36d3fbdf5964244a5bd9641b84f1ace5401bc712f86de785a8bf" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.219889 4740 scope.go:117] "RemoveContainer" containerID="9d51a209bb10f62ab0a08b66b681443f9ccafdaf637e3cc90eb3637be4186cf4" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.236925 4740 scope.go:117] "RemoveContainer" containerID="f6722e12da3b100277e4d42a09b47fbe2bb4c0a2afaa821c8595fad3b0f63ed3" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.255488 4740 scope.go:117] "RemoveContainer" containerID="db3cbb707b998fa3f1e401bb0ae1d94a74ebdeb0d314c7f69ef26d9c85483992" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.275085 4740 
scope.go:117] "RemoveContainer" containerID="aa84c58426f16785e23f6726ef9c00ff0a3ec0cf482db30dba1e51181476f2bb" Feb 16 13:03:21 crc kubenswrapper[4740]: I0216 13:03:21.288906 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4734b9dd-f672-4895-86b3-538d9012af9f" path="/var/lib/kubelet/pods/4734b9dd-f672-4895-86b3-538d9012af9f/volumes" Feb 16 13:03:22 crc kubenswrapper[4740]: I0216 13:03:22.104566 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"3c822ef5edc784c2ed122b83091dcc838535d1f2ea973cfcebe6333e909032a7"} Feb 16 13:03:22 crc kubenswrapper[4740]: I0216 13:03:22.105247 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"0b2cf127fc76d5f211a82c80631be91aa8fed66cd8d416bce0603ccf11a098bb"} Feb 16 13:03:22 crc kubenswrapper[4740]: I0216 13:03:22.105268 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"1b599c8df447c38b19dee6f7763be37b308a6bea8f6131e02de6065a460c56d4"} Feb 16 13:03:22 crc kubenswrapper[4740]: I0216 13:03:22.105284 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"780ba962a9087e0d5489e81dd729f5b9f3268de2902452e5e1cbf4e5a1abf2be"} Feb 16 13:03:22 crc kubenswrapper[4740]: I0216 13:03:22.105300 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"4015b96cbdd41097647f4ac64dbcc061ea2e3211358b63803d279d32e84865f4"} Feb 16 13:03:22 crc 
kubenswrapper[4740]: I0216 13:03:22.105317 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"1c2a34fb4883ba1ef50823773f6fb33e2dddf0a0c5997c273d5ef93eeaf706e8"} Feb 16 13:03:25 crc kubenswrapper[4740]: I0216 13:03:25.125935 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"8eebd8d9e91db7b57983954a1dcd5eb9296a588096c359e650b022f62936a0bf"} Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.145951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" event={"ID":"821af362-e357-43e9-86e5-259cef9b4a63","Type":"ContainerStarted","Data":"c6e7305ecf76c7668b1b225e0e04e911c6136fa17328d03b9aee282db663e924"} Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.146500 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.146544 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.146554 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.170881 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.182602 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" podStartSLOduration=7.182579109 podStartE2EDuration="7.182579109s" podCreationTimestamp="2026-02-16 13:03:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:03:27.179572699 +0000 UTC m=+634.555921420" watchObservedRunningTime="2026-02-16 13:03:27.182579109 +0000 UTC m=+634.558927830" Feb 16 13:03:27 crc kubenswrapper[4740]: I0216 13:03:27.191091 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:31 crc kubenswrapper[4740]: I0216 13:03:31.281317 4740 scope.go:117] "RemoveContainer" containerID="20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c" Feb 16 13:03:31 crc kubenswrapper[4740]: E0216 13:03:31.282010 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-v88dn_openshift-multus(21f981d4-46dd-4bb5-b244-aaf603008c5e)\"" pod="openshift-multus/multus-v88dn" podUID="21f981d4-46dd-4bb5-b244-aaf603008c5e" Feb 16 13:03:45 crc kubenswrapper[4740]: I0216 13:03:45.281243 4740 scope.go:117] "RemoveContainer" containerID="20422386d339e37ad28434bbaa9f3e411c93a6615f99b7e36c75d19d9a2a166c" Feb 16 13:03:46 crc kubenswrapper[4740]: I0216 13:03:46.267204 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v88dn_21f981d4-46dd-4bb5-b244-aaf603008c5e/kube-multus/2.log" Feb 16 13:03:46 crc kubenswrapper[4740]: I0216 13:03:46.267838 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v88dn" event={"ID":"21f981d4-46dd-4bb5-b244-aaf603008c5e","Type":"ContainerStarted","Data":"97efa07b84aaee645f78c1f71ef09129a178f2d4e53e1afc73affcf68d389413"} Feb 16 13:03:50 crc kubenswrapper[4740]: I0216 13:03:50.724344 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rglx7" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.527632 4740 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp"] Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.530513 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.532731 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.536009 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp"] Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.609335 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-465qd\" (UniqueName: \"kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.609393 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.609447 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.710214 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-465qd\" (UniqueName: \"kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.710541 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.710746 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.710929 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.711459 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.742582 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-465qd\" (UniqueName: \"kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:57 crc kubenswrapper[4740]: I0216 13:03:57.850410 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:03:58 crc kubenswrapper[4740]: I0216 13:03:58.032702 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp"] Feb 16 13:03:58 crc kubenswrapper[4740]: W0216 13:03:58.044184 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e36b7f7_a888_4da4_a510_deafe9588b20.slice/crio-7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1 WatchSource:0}: Error finding container 7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1: Status 404 returned error can't find the container with id 7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1 Feb 16 13:03:58 crc kubenswrapper[4740]: I0216 13:03:58.446058 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerStarted","Data":"03ad00eae0a3a71c4297225458652dfc1ddde72c80091a97bf98cb7899dc9ee3"} Feb 16 13:03:58 crc kubenswrapper[4740]: I0216 13:03:58.446133 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerStarted","Data":"7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1"} Feb 16 13:03:59 crc kubenswrapper[4740]: I0216 13:03:59.451633 4740 generic.go:334] "Generic (PLEG): container finished" podID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerID="03ad00eae0a3a71c4297225458652dfc1ddde72c80091a97bf98cb7899dc9ee3" exitCode=0 Feb 16 13:03:59 crc kubenswrapper[4740]: I0216 13:03:59.451674 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerDied","Data":"03ad00eae0a3a71c4297225458652dfc1ddde72c80091a97bf98cb7899dc9ee3"} Feb 16 13:04:01 crc kubenswrapper[4740]: I0216 13:04:01.467903 4740 generic.go:334] "Generic (PLEG): container finished" podID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerID="922f3a2873a321b2c72d72f908bfd984a86e9d1eb495494228363388d2e29678" exitCode=0 Feb 16 13:04:01 crc kubenswrapper[4740]: I0216 13:04:01.468090 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerDied","Data":"922f3a2873a321b2c72d72f908bfd984a86e9d1eb495494228363388d2e29678"} Feb 16 13:04:02 crc kubenswrapper[4740]: I0216 13:04:02.477326 4740 generic.go:334] "Generic (PLEG): container finished" podID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerID="1f3a8e935017b1d1943e3f7c5358a0c1111f6c1f96fbbbbbac3d28e72fb51f3d" exitCode=0 Feb 16 13:04:02 crc kubenswrapper[4740]: I0216 13:04:02.477385 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerDied","Data":"1f3a8e935017b1d1943e3f7c5358a0c1111f6c1f96fbbbbbac3d28e72fb51f3d"} Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.720237 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.783847 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle\") pod \"4e36b7f7-a888-4da4-a510-deafe9588b20\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.783937 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util\") pod \"4e36b7f7-a888-4da4-a510-deafe9588b20\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.784604 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle" (OuterVolumeSpecName: "bundle") pod "4e36b7f7-a888-4da4-a510-deafe9588b20" (UID: "4e36b7f7-a888-4da4-a510-deafe9588b20"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.784990 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-465qd\" (UniqueName: \"kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd\") pod \"4e36b7f7-a888-4da4-a510-deafe9588b20\" (UID: \"4e36b7f7-a888-4da4-a510-deafe9588b20\") " Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.785326 4740 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.789428 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd" (OuterVolumeSpecName: "kube-api-access-465qd") pod "4e36b7f7-a888-4da4-a510-deafe9588b20" (UID: "4e36b7f7-a888-4da4-a510-deafe9588b20"). InnerVolumeSpecName "kube-api-access-465qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.885914 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-465qd\" (UniqueName: \"kubernetes.io/projected/4e36b7f7-a888-4da4-a510-deafe9588b20-kube-api-access-465qd\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.945186 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util" (OuterVolumeSpecName: "util") pod "4e36b7f7-a888-4da4-a510-deafe9588b20" (UID: "4e36b7f7-a888-4da4-a510-deafe9588b20"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:04:03 crc kubenswrapper[4740]: I0216 13:04:03.987087 4740 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e36b7f7-a888-4da4-a510-deafe9588b20-util\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:04 crc kubenswrapper[4740]: I0216 13:04:04.493641 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" event={"ID":"4e36b7f7-a888-4da4-a510-deafe9588b20","Type":"ContainerDied","Data":"7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1"} Feb 16 13:04:04 crc kubenswrapper[4740]: I0216 13:04:04.493685 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e7d768fb370b5a7ebb05d4c660b581420506118b53656a0ce1aee743c6f58e1" Feb 16 13:04:04 crc kubenswrapper[4740]: I0216 13:04:04.493701 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.069667 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-76m6k"] Feb 16 13:04:09 crc kubenswrapper[4740]: E0216 13:04:09.070303 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="pull" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.070321 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="pull" Feb 16 13:04:09 crc kubenswrapper[4740]: E0216 13:04:09.070334 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="util" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.070341 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="util" Feb 16 13:04:09 crc kubenswrapper[4740]: E0216 13:04:09.070357 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="extract" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.070364 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="extract" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.070477 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e36b7f7-a888-4da4-a510-deafe9588b20" containerName="extract" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.070953 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.074524 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.074887 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-56rgw" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.075509 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.087457 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-76m6k"] Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.254112 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55z87\" (UniqueName: \"kubernetes.io/projected/afdcb81a-db2a-4c04-b73b-30facf2d10af-kube-api-access-55z87\") pod \"nmstate-operator-694c9596b7-76m6k\" (UID: \"afdcb81a-db2a-4c04-b73b-30facf2d10af\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" Feb 
16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.355544 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55z87\" (UniqueName: \"kubernetes.io/projected/afdcb81a-db2a-4c04-b73b-30facf2d10af-kube-api-access-55z87\") pod \"nmstate-operator-694c9596b7-76m6k\" (UID: \"afdcb81a-db2a-4c04-b73b-30facf2d10af\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.374640 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55z87\" (UniqueName: \"kubernetes.io/projected/afdcb81a-db2a-4c04-b73b-30facf2d10af-kube-api-access-55z87\") pod \"nmstate-operator-694c9596b7-76m6k\" (UID: \"afdcb81a-db2a-4c04-b73b-30facf2d10af\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.395515 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" Feb 16 13:04:09 crc kubenswrapper[4740]: I0216 13:04:09.598131 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-76m6k"] Feb 16 13:04:10 crc kubenswrapper[4740]: I0216 13:04:10.528353 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" event={"ID":"afdcb81a-db2a-4c04-b73b-30facf2d10af","Type":"ContainerStarted","Data":"f059097902024f8242f4fd49cdb4b266f51749ca504b0ce23c35ea5cec4b476f"} Feb 16 13:04:12 crc kubenswrapper[4740]: I0216 13:04:12.539596 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" event={"ID":"afdcb81a-db2a-4c04-b73b-30facf2d10af","Type":"ContainerStarted","Data":"cea357629bf18f5604596868e8b4353bceecb56b373d61ebe68ec2c4a9831df4"} Feb 16 13:04:15 crc kubenswrapper[4740]: I0216 13:04:15.575143 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:04:15 crc kubenswrapper[4740]: I0216 13:04:15.575564 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.844264 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-76m6k" podStartSLOduration=6.9764121 podStartE2EDuration="8.844248009s" podCreationTimestamp="2026-02-16 13:04:09 +0000 UTC" firstStartedPulling="2026-02-16 13:04:09.615758166 +0000 UTC m=+676.992106877" lastFinishedPulling="2026-02-16 13:04:11.483594065 +0000 UTC m=+678.859942786" observedRunningTime="2026-02-16 13:04:12.553557178 +0000 UTC m=+679.929905909" watchObservedRunningTime="2026-02-16 13:04:17.844248009 +0000 UTC m=+685.220596730" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.846189 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.846985 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.849060 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pfmc4" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.855950 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.857131 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.858754 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.866073 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.877561 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.886722 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-v88gn"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.887419 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.965141 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b7ffd056-af44-4007-8de6-cc707902d4c4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.965209 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqvsv\" (UniqueName: \"kubernetes.io/projected/58a2ae40-4e01-43af-907b-7e91246277ea-kube-api-access-gqvsv\") pod \"nmstate-metrics-58c85c668d-g5mkh\" (UID: \"58a2ae40-4e01-43af-907b-7e91246277ea\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.965313 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6pvr\" (UniqueName: \"kubernetes.io/projected/b7ffd056-af44-4007-8de6-cc707902d4c4-kube-api-access-v6pvr\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.982420 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc"] Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.983116 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.986091 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.986273 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cnlh8" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.986409 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 13:04:17 crc kubenswrapper[4740]: I0216 13:04:17.998378 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc"] Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067089 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-dbus-socket\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067151 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b7ffd056-af44-4007-8de6-cc707902d4c4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067179 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-nmstate-lock\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc 
kubenswrapper[4740]: I0216 13:04:18.067238 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-ovs-socket\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067273 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqvsv\" (UniqueName: \"kubernetes.io/projected/58a2ae40-4e01-43af-907b-7e91246277ea-kube-api-access-gqvsv\") pod \"nmstate-metrics-58c85c668d-g5mkh\" (UID: \"58a2ae40-4e01-43af-907b-7e91246277ea\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6pvr\" (UniqueName: \"kubernetes.io/projected/b7ffd056-af44-4007-8de6-cc707902d4c4-kube-api-access-v6pvr\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.067320 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5v6\" (UniqueName: \"kubernetes.io/projected/3c0ee084-492b-46da-82b3-9c9a8e1715fd-kube-api-access-zl5v6\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.083427 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6pvr\" (UniqueName: \"kubernetes.io/projected/b7ffd056-af44-4007-8de6-cc707902d4c4-kube-api-access-v6pvr\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.084185 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqvsv\" (UniqueName: \"kubernetes.io/projected/58a2ae40-4e01-43af-907b-7e91246277ea-kube-api-access-gqvsv\") pod \"nmstate-metrics-58c85c668d-g5mkh\" (UID: \"58a2ae40-4e01-43af-907b-7e91246277ea\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.086864 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b7ffd056-af44-4007-8de6-cc707902d4c4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-r9sw6\" (UID: \"b7ffd056-af44-4007-8de6-cc707902d4c4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.168780 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7bqh\" (UniqueName: \"kubernetes.io/projected/edcdba40-6318-4d29-a235-829e94bc8089-kube-api-access-f7bqh\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.168957 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edcdba40-6318-4d29-a235-829e94bc8089-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169003 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-dbus-socket\") pod 
\"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169136 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-nmstate-lock\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169176 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-ovs-socket\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169205 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edcdba40-6318-4d29-a235-829e94bc8089-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169240 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-nmstate-lock\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169300 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-dbus-socket\") pod \"nmstate-handler-v88gn\" (UID: 
\"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169304 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3c0ee084-492b-46da-82b3-9c9a8e1715fd-ovs-socket\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.169328 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5v6\" (UniqueName: \"kubernetes.io/projected/3c0ee084-492b-46da-82b3-9c9a8e1715fd-kube-api-access-zl5v6\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.176617 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.183280 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.184282 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5v6\" (UniqueName: \"kubernetes.io/projected/3c0ee084-492b-46da-82b3-9c9a8e1715fd-kube-api-access-zl5v6\") pod \"nmstate-handler-v88gn\" (UID: \"3c0ee084-492b-46da-82b3-9c9a8e1715fd\") " pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.200979 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65dcb9588c-cxtv2"] Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.201760 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.210590 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dcb9588c-cxtv2"] Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.214626 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.270634 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7bqh\" (UniqueName: \"kubernetes.io/projected/edcdba40-6318-4d29-a235-829e94bc8089-kube-api-access-f7bqh\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.270696 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edcdba40-6318-4d29-a235-829e94bc8089-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.270766 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edcdba40-6318-4d29-a235-829e94bc8089-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.272456 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edcdba40-6318-4d29-a235-829e94bc8089-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: 
\"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.282589 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edcdba40-6318-4d29-a235-829e94bc8089-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.291290 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7bqh\" (UniqueName: \"kubernetes.io/projected/edcdba40-6318-4d29-a235-829e94bc8089-kube-api-access-f7bqh\") pod \"nmstate-console-plugin-5c78fc5d65-nrnvc\" (UID: \"edcdba40-6318-4d29-a235-829e94bc8089\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.297036 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371666 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4c5k\" (UniqueName: \"kubernetes.io/projected/f22e5359-95ad-4163-8f93-88353190b805-kube-api-access-p4c5k\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371708 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-console-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371749 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-oauth-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371770 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-service-ca\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371789 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-oauth-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371831 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-trusted-ca-bundle\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.371858 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.410770 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6"] Feb 16 13:04:18 crc kubenswrapper[4740]: W0216 13:04:18.418568 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7ffd056_af44_4007_8de6_cc707902d4c4.slice/crio-a1baadd7219fdeddcbe3d56986616ff267e4614958f73d06e9e2b22511e1535e WatchSource:0}: Error finding container a1baadd7219fdeddcbe3d56986616ff267e4614958f73d06e9e2b22511e1535e: Status 404 returned error can't find the container with id a1baadd7219fdeddcbe3d56986616ff267e4614958f73d06e9e2b22511e1535e Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.448679 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh"] Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 
13:04:18.473017 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-trusted-ca-bundle\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.473091 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.473151 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4c5k\" (UniqueName: \"kubernetes.io/projected/f22e5359-95ad-4163-8f93-88353190b805-kube-api-access-p4c5k\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.473172 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-console-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.473242 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-oauth-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.473442 
4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-service-ca\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.474732 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-oauth-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.474775 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-service-ca\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.474038 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-console-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.474666 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-trusted-ca-bundle\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.475539 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f22e5359-95ad-4163-8f93-88353190b805-oauth-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.478943 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-oauth-config\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.479578 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f22e5359-95ad-4163-8f93-88353190b805-console-serving-cert\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.498906 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4c5k\" (UniqueName: \"kubernetes.io/projected/f22e5359-95ad-4163-8f93-88353190b805-kube-api-access-p4c5k\") pod \"console-65dcb9588c-cxtv2\" (UID: \"f22e5359-95ad-4163-8f93-88353190b805\") " pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.507496 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc"] Feb 16 13:04:18 crc kubenswrapper[4740]: W0216 13:04:18.512176 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedcdba40_6318_4d29_a235_829e94bc8089.slice/crio-038a0fca574acf147a9a05e286e53f706e038fcc6253340b5a2d16d00735820a WatchSource:0}: Error finding container 
038a0fca574acf147a9a05e286e53f706e038fcc6253340b5a2d16d00735820a: Status 404 returned error can't find the container with id 038a0fca574acf147a9a05e286e53f706e038fcc6253340b5a2d16d00735820a Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.554268 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.571628 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v88gn" event={"ID":"3c0ee084-492b-46da-82b3-9c9a8e1715fd","Type":"ContainerStarted","Data":"4965bbe93b5792a12256e17dfd65b00618ca5c089bfc38a19dec37712ee87d4c"} Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.572693 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" event={"ID":"b7ffd056-af44-4007-8de6-cc707902d4c4","Type":"ContainerStarted","Data":"a1baadd7219fdeddcbe3d56986616ff267e4614958f73d06e9e2b22511e1535e"} Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.573418 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" event={"ID":"edcdba40-6318-4d29-a235-829e94bc8089","Type":"ContainerStarted","Data":"038a0fca574acf147a9a05e286e53f706e038fcc6253340b5a2d16d00735820a"} Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.574453 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" event={"ID":"58a2ae40-4e01-43af-907b-7e91246277ea","Type":"ContainerStarted","Data":"dd28d0ee016e225ca7e94bb019049e46e5c942ec5dc794f21927a50e28981eb9"} Feb 16 13:04:18 crc kubenswrapper[4740]: I0216 13:04:18.715067 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65dcb9588c-cxtv2"] Feb 16 13:04:19 crc kubenswrapper[4740]: I0216 13:04:19.581878 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-65dcb9588c-cxtv2" event={"ID":"f22e5359-95ad-4163-8f93-88353190b805","Type":"ContainerStarted","Data":"e0f7746addb599768b22590261559cf5667a05189a669384707bae343aabdc38"} Feb 16 13:04:19 crc kubenswrapper[4740]: I0216 13:04:19.582220 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65dcb9588c-cxtv2" event={"ID":"f22e5359-95ad-4163-8f93-88353190b805","Type":"ContainerStarted","Data":"909dfc8245a0d1a1655cd87968208d3e76c879417d61d66e9cc323af1ee2b5e2"} Feb 16 13:04:19 crc kubenswrapper[4740]: I0216 13:04:19.608097 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65dcb9588c-cxtv2" podStartSLOduration=1.608073628 podStartE2EDuration="1.608073628s" podCreationTimestamp="2026-02-16 13:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:04:19.604174221 +0000 UTC m=+686.980522972" watchObservedRunningTime="2026-02-16 13:04:19.608073628 +0000 UTC m=+686.984422349" Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.594274 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v88gn" event={"ID":"3c0ee084-492b-46da-82b3-9c9a8e1715fd","Type":"ContainerStarted","Data":"3aee1cd43e3a6cc6060fd671a21f7b15a2286d84ebe4c5bc5055c0a1630177f5"} Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.594880 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.598081 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" event={"ID":"b7ffd056-af44-4007-8de6-cc707902d4c4","Type":"ContainerStarted","Data":"d17a8aea7361581a915e6fb6854d05a7cce92a74b1e26cff948fbc8ec764c904"} Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.598296 4740 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.600413 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" event={"ID":"edcdba40-6318-4d29-a235-829e94bc8089","Type":"ContainerStarted","Data":"7c4d9f98ef1c1089cbee1931cb50b4becee4db3519f6739aeb6a86707d92bc32"} Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.602686 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" event={"ID":"58a2ae40-4e01-43af-907b-7e91246277ea","Type":"ContainerStarted","Data":"96c4ca7426706ccd2b5c167f7da729785d3f75f23ef7537a92217bf24880f9e4"} Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.616081 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-v88gn" podStartSLOduration=1.711942467 podStartE2EDuration="4.616063195s" podCreationTimestamp="2026-02-16 13:04:17 +0000 UTC" firstStartedPulling="2026-02-16 13:04:18.297432332 +0000 UTC m=+685.673781053" lastFinishedPulling="2026-02-16 13:04:21.20155304 +0000 UTC m=+688.577901781" observedRunningTime="2026-02-16 13:04:21.612073415 +0000 UTC m=+688.988422136" watchObservedRunningTime="2026-02-16 13:04:21.616063195 +0000 UTC m=+688.992411916" Feb 16 13:04:21 crc kubenswrapper[4740]: I0216 13:04:21.629539 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" podStartSLOduration=1.835698286 podStartE2EDuration="4.629522166s" podCreationTimestamp="2026-02-16 13:04:17 +0000 UTC" firstStartedPulling="2026-02-16 13:04:18.421292185 +0000 UTC m=+685.797640906" lastFinishedPulling="2026-02-16 13:04:21.215116035 +0000 UTC m=+688.591464786" observedRunningTime="2026-02-16 13:04:21.626336162 +0000 UTC m=+689.002684893" watchObservedRunningTime="2026-02-16 13:04:21.629522166 +0000 
UTC m=+689.005870887" Feb 16 13:04:23 crc kubenswrapper[4740]: I0216 13:04:23.306308 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nrnvc" podStartSLOduration=3.626233031 podStartE2EDuration="6.30629002s" podCreationTimestamp="2026-02-16 13:04:17 +0000 UTC" firstStartedPulling="2026-02-16 13:04:18.514318296 +0000 UTC m=+685.890667017" lastFinishedPulling="2026-02-16 13:04:21.194375245 +0000 UTC m=+688.570724006" observedRunningTime="2026-02-16 13:04:21.643139243 +0000 UTC m=+689.019487964" watchObservedRunningTime="2026-02-16 13:04:23.30629002 +0000 UTC m=+690.682638741" Feb 16 13:04:23 crc kubenswrapper[4740]: I0216 13:04:23.616191 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" event={"ID":"58a2ae40-4e01-43af-907b-7e91246277ea","Type":"ContainerStarted","Data":"85f883a5f53bf9f7e69be0fc35a9513ead9b09563524871a9a8c28f816dd3dcb"} Feb 16 13:04:23 crc kubenswrapper[4740]: I0216 13:04:23.632509 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-g5mkh" podStartSLOduration=1.616540968 podStartE2EDuration="6.632485179s" podCreationTimestamp="2026-02-16 13:04:17 +0000 UTC" firstStartedPulling="2026-02-16 13:04:18.457282755 +0000 UTC m=+685.833631476" lastFinishedPulling="2026-02-16 13:04:23.473226966 +0000 UTC m=+690.849575687" observedRunningTime="2026-02-16 13:04:23.629269453 +0000 UTC m=+691.005618194" watchObservedRunningTime="2026-02-16 13:04:23.632485179 +0000 UTC m=+691.008833910" Feb 16 13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.237568 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-v88gn" Feb 16 13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.555193 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 
13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.555262 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.563009 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.645036 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65dcb9588c-cxtv2" Feb 16 13:04:28 crc kubenswrapper[4740]: I0216 13:04:28.695444 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gctsd"] Feb 16 13:04:38 crc kubenswrapper[4740]: I0216 13:04:38.193798 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-r9sw6" Feb 16 13:04:45 crc kubenswrapper[4740]: I0216 13:04:45.575434 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:04:45 crc kubenswrapper[4740]: I0216 13:04:45.576559 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.113908 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk"] Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.118804 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.121319 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.127397 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk"] Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.230755 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.230870 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd7w9\" (UniqueName: \"kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.230906 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: 
I0216 13:04:51.333027 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.333632 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd7w9\" (UniqueName: \"kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.333862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.333705 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.334169 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.354872 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd7w9\" (UniqueName: \"kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.440763 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:51 crc kubenswrapper[4740]: I0216 13:04:51.896226 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk"] Feb 16 13:04:52 crc kubenswrapper[4740]: I0216 13:04:52.783301 4740 generic.go:334] "Generic (PLEG): container finished" podID="911ccf29-a1bf-402a-b445-df244f1acb70" containerID="dd0de84e56c9255236e11b685dd1fb2353e37f74f2d50abeddb8e9ac3617840b" exitCode=0 Feb 16 13:04:52 crc kubenswrapper[4740]: I0216 13:04:52.783368 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" event={"ID":"911ccf29-a1bf-402a-b445-df244f1acb70","Type":"ContainerDied","Data":"dd0de84e56c9255236e11b685dd1fb2353e37f74f2d50abeddb8e9ac3617840b"} Feb 16 13:04:52 crc kubenswrapper[4740]: I0216 13:04:52.783607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" event={"ID":"911ccf29-a1bf-402a-b445-df244f1acb70","Type":"ContainerStarted","Data":"38343100d5fece67f9046cc9eae2af977b7f2c047709b8aa17cc3753b6918116"} Feb 16 13:04:53 crc kubenswrapper[4740]: I0216 13:04:53.518382 4740 scope.go:117] "RemoveContainer" containerID="0f596f60cfc20df2fe6f9a50eaee6cbff74e73a1ac2673a361d75b584fd10bfd" Feb 16 13:04:53 crc kubenswrapper[4740]: I0216 13:04:53.761432 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-gctsd" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" containerID="cri-o://ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1" gracePeriod=15 Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.188862 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gctsd_adc3a749-7453-4afe-ba48-f34188be4832/console/0.log" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.189188 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.268705 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sslp\" (UniqueName: \"kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.268790 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.268898 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.268926 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269002 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269028 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269061 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle\") pod \"adc3a749-7453-4afe-ba48-f34188be4832\" (UID: \"adc3a749-7453-4afe-ba48-f34188be4832\") " Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269447 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269484 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca" (OuterVolumeSpecName: "service-ca") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.269875 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.270130 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config" (OuterVolumeSpecName: "console-config") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.274916 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.275344 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp" (OuterVolumeSpecName: "kube-api-access-2sslp") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "kube-api-access-2sslp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.277434 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "adc3a749-7453-4afe-ba48-f34188be4832" (UID: "adc3a749-7453-4afe-ba48-f34188be4832"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.371488 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.371864 4740 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.372050 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.372263 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sslp\" (UniqueName: \"kubernetes.io/projected/adc3a749-7453-4afe-ba48-f34188be4832-kube-api-access-2sslp\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.372389 4740 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.372591 4740 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adc3a749-7453-4afe-ba48-f34188be4832-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.372861 4740 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adc3a749-7453-4afe-ba48-f34188be4832-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:54 crc 
kubenswrapper[4740]: I0216 13:04:54.797033 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gctsd_adc3a749-7453-4afe-ba48-f34188be4832/console/0.log" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.797349 4740 generic.go:334] "Generic (PLEG): container finished" podID="adc3a749-7453-4afe-ba48-f34188be4832" containerID="ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1" exitCode=2 Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.797432 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gctsd" event={"ID":"adc3a749-7453-4afe-ba48-f34188be4832","Type":"ContainerDied","Data":"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1"} Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.797495 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gctsd" event={"ID":"adc3a749-7453-4afe-ba48-f34188be4832","Type":"ContainerDied","Data":"531ee6088e028abeb40db4014fff58f47925cdba0b3674ddf9755268d1aa83d4"} Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.797520 4740 scope.go:117] "RemoveContainer" containerID="ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.797432 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gctsd" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.804477 4740 generic.go:334] "Generic (PLEG): container finished" podID="911ccf29-a1bf-402a-b445-df244f1acb70" containerID="9b44d3ac65230daec50392ab807602372c0a2c24b01d13ecd8738da87cac0fc1" exitCode=0 Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.804516 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" event={"ID":"911ccf29-a1bf-402a-b445-df244f1acb70","Type":"ContainerDied","Data":"9b44d3ac65230daec50392ab807602372c0a2c24b01d13ecd8738da87cac0fc1"} Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.826764 4740 scope.go:117] "RemoveContainer" containerID="ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1" Feb 16 13:04:54 crc kubenswrapper[4740]: E0216 13:04:54.827308 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1\": container with ID starting with ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1 not found: ID does not exist" containerID="ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.827592 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1"} err="failed to get container status \"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1\": rpc error: code = NotFound desc = could not find container \"ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1\": container with ID starting with ebf0a78c4740585e6491511c2a969735aaf1efb913d2a5fcfce0fdc7894cd9d1 not found: ID does not exist" Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.839383 4740 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gctsd"] Feb 16 13:04:54 crc kubenswrapper[4740]: I0216 13:04:54.842688 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-gctsd"] Feb 16 13:04:55 crc kubenswrapper[4740]: I0216 13:04:55.295726 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc3a749-7453-4afe-ba48-f34188be4832" path="/var/lib/kubelet/pods/adc3a749-7453-4afe-ba48-f34188be4832/volumes" Feb 16 13:04:55 crc kubenswrapper[4740]: I0216 13:04:55.814500 4740 generic.go:334] "Generic (PLEG): container finished" podID="911ccf29-a1bf-402a-b445-df244f1acb70" containerID="50606fcd420271c285464e348b53f8c669b10842110551df296340c5e230fea3" exitCode=0 Feb 16 13:04:55 crc kubenswrapper[4740]: I0216 13:04:55.814553 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" event={"ID":"911ccf29-a1bf-402a-b445-df244f1acb70","Type":"ContainerDied","Data":"50606fcd420271c285464e348b53f8c669b10842110551df296340c5e230fea3"} Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.060763 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.213179 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle\") pod \"911ccf29-a1bf-402a-b445-df244f1acb70\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.213257 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util\") pod \"911ccf29-a1bf-402a-b445-df244f1acb70\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.213318 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd7w9\" (UniqueName: \"kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9\") pod \"911ccf29-a1bf-402a-b445-df244f1acb70\" (UID: \"911ccf29-a1bf-402a-b445-df244f1acb70\") " Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.215076 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle" (OuterVolumeSpecName: "bundle") pod "911ccf29-a1bf-402a-b445-df244f1acb70" (UID: "911ccf29-a1bf-402a-b445-df244f1acb70"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.222378 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9" (OuterVolumeSpecName: "kube-api-access-hd7w9") pod "911ccf29-a1bf-402a-b445-df244f1acb70" (UID: "911ccf29-a1bf-402a-b445-df244f1acb70"). InnerVolumeSpecName "kube-api-access-hd7w9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.233987 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util" (OuterVolumeSpecName: "util") pod "911ccf29-a1bf-402a-b445-df244f1acb70" (UID: "911ccf29-a1bf-402a-b445-df244f1acb70"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.315209 4740 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.315244 4740 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/911ccf29-a1bf-402a-b445-df244f1acb70-util\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.315255 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd7w9\" (UniqueName: \"kubernetes.io/projected/911ccf29-a1bf-402a-b445-df244f1acb70-kube-api-access-hd7w9\") on node \"crc\" DevicePath \"\"" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.829359 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" event={"ID":"911ccf29-a1bf-402a-b445-df244f1acb70","Type":"ContainerDied","Data":"38343100d5fece67f9046cc9eae2af977b7f2c047709b8aa17cc3753b6918116"} Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.829409 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38343100d5fece67f9046cc9eae2af977b7f2c047709b8aa17cc3753b6918116" Feb 16 13:04:57 crc kubenswrapper[4740]: I0216 13:04:57.829433 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.500026 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw"] Feb 16 13:05:06 crc kubenswrapper[4740]: E0216 13:05:06.500768 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="pull" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.500783 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="pull" Feb 16 13:05:06 crc kubenswrapper[4740]: E0216 13:05:06.500793 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="util" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.500802 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="util" Feb 16 13:05:06 crc kubenswrapper[4740]: E0216 13:05:06.500843 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.500854 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" Feb 16 13:05:06 crc kubenswrapper[4740]: E0216 13:05:06.500868 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="extract" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.500876 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" containerName="extract" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.501009 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="911ccf29-a1bf-402a-b445-df244f1acb70" 
containerName="extract" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.501022 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc3a749-7453-4afe-ba48-f34188be4832" containerName="console" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.501488 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.504921 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-5j6l9" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.505082 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.505093 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.508968 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.509139 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.522026 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw"] Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.623752 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-apiservice-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 
13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.623826 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-webhook-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.623933 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x77m\" (UniqueName: \"kubernetes.io/projected/97f25eec-68aa-4b48-b40a-08ce0599d525-kube-api-access-6x77m\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.725329 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x77m\" (UniqueName: \"kubernetes.io/projected/97f25eec-68aa-4b48-b40a-08ce0599d525-kube-api-access-6x77m\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.725439 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-apiservice-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.725471 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-webhook-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.747019 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-apiservice-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.747019 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97f25eec-68aa-4b48-b40a-08ce0599d525-webhook-cert\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.751585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x77m\" (UniqueName: \"kubernetes.io/projected/97f25eec-68aa-4b48-b40a-08ce0599d525-kube-api-access-6x77m\") pod \"metallb-operator-controller-manager-75b694c59-wkpkw\" (UID: \"97f25eec-68aa-4b48-b40a-08ce0599d525\") " pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:06 crc kubenswrapper[4740]: I0216 13:05:06.824870 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.022096 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx"] Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.023230 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.025509 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.027871 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.028311 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jl8tw" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.043020 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx"] Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.130185 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-webhook-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.130252 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-apiservice-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" 
(UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.130287 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvhtv\" (UniqueName: \"kubernetes.io/projected/4163a038-60ca-4e8e-bf45-028b04101fc9-kube-api-access-kvhtv\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.231876 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-apiservice-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.231958 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-webhook-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.231999 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvhtv\" (UniqueName: \"kubernetes.io/projected/4163a038-60ca-4e8e-bf45-028b04101fc9-kube-api-access-kvhtv\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.252477 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-webhook-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.253458 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4163a038-60ca-4e8e-bf45-028b04101fc9-apiservice-cert\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.255965 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvhtv\" (UniqueName: \"kubernetes.io/projected/4163a038-60ca-4e8e-bf45-028b04101fc9-kube-api-access-kvhtv\") pod \"metallb-operator-webhook-server-7887f4bfcc-9grrx\" (UID: \"4163a038-60ca-4e8e-bf45-028b04101fc9\") " pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.341938 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw"] Feb 16 13:05:07 crc kubenswrapper[4740]: W0216 13:05:07.345519 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97f25eec_68aa_4b48_b40a_08ce0599d525.slice/crio-1038e4ab9b75e51525a55a167f5a057ee5a6e969bf7e51e96e5894a017063a40 WatchSource:0}: Error finding container 1038e4ab9b75e51525a55a167f5a057ee5a6e969bf7e51e96e5894a017063a40: Status 404 returned error can't find the container with id 1038e4ab9b75e51525a55a167f5a057ee5a6e969bf7e51e96e5894a017063a40 Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.355760 4740 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.580409 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx"] Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.893733 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" event={"ID":"4163a038-60ca-4e8e-bf45-028b04101fc9","Type":"ContainerStarted","Data":"efeffddba81eb0380a0f3f7ef629ee7bb3ee73d08afc3a747fb393664af4e60c"} Feb 16 13:05:07 crc kubenswrapper[4740]: I0216 13:05:07.895219 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" event={"ID":"97f25eec-68aa-4b48-b40a-08ce0599d525","Type":"ContainerStarted","Data":"1038e4ab9b75e51525a55a167f5a057ee5a6e969bf7e51e96e5894a017063a40"} Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.927795 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" event={"ID":"97f25eec-68aa-4b48-b40a-08ce0599d525","Type":"ContainerStarted","Data":"3679805d8c4ed93ffb7101358dbe3148cc0ee68f61f7cb173f24187bebd08b49"} Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.928420 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.929164 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" event={"ID":"4163a038-60ca-4e8e-bf45-028b04101fc9","Type":"ContainerStarted","Data":"d3e825656c314ee604b066ac825937e92ee12c7c0ab540fa53c7dcabc5aac78f"} Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.929635 4740 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.943854 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" podStartSLOduration=1.653819235 podStartE2EDuration="6.943840192s" podCreationTimestamp="2026-02-16 13:05:06 +0000 UTC" firstStartedPulling="2026-02-16 13:05:07.348868532 +0000 UTC m=+734.725217263" lastFinishedPulling="2026-02-16 13:05:12.638889499 +0000 UTC m=+740.015238220" observedRunningTime="2026-02-16 13:05:12.943605804 +0000 UTC m=+740.319954525" watchObservedRunningTime="2026-02-16 13:05:12.943840192 +0000 UTC m=+740.320188913" Feb 16 13:05:12 crc kubenswrapper[4740]: I0216 13:05:12.967211 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" podStartSLOduration=0.916660809 podStartE2EDuration="5.967196007s" podCreationTimestamp="2026-02-16 13:05:07 +0000 UTC" firstStartedPulling="2026-02-16 13:05:07.593359482 +0000 UTC m=+734.969708203" lastFinishedPulling="2026-02-16 13:05:12.64389468 +0000 UTC m=+740.020243401" observedRunningTime="2026-02-16 13:05:12.963933191 +0000 UTC m=+740.340281932" watchObservedRunningTime="2026-02-16 13:05:12.967196007 +0000 UTC m=+740.343544728" Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.575108 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.575497 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.575546 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.576447 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.576530 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c" gracePeriod=600 Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.947099 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c" exitCode=0 Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.947133 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c"} Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.947509 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" 
event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3"} Feb 16 13:05:15 crc kubenswrapper[4740]: I0216 13:05:15.947536 4740 scope.go:117] "RemoveContainer" containerID="681d70fcf74406b10f464682fb9a2ec1afd84c1a626642f4bffe58a31fd272d7" Feb 16 13:05:27 crc kubenswrapper[4740]: I0216 13:05:27.363894 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7887f4bfcc-9grrx" Feb 16 13:05:32 crc kubenswrapper[4740]: I0216 13:05:32.201425 4740 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 13:05:46 crc kubenswrapper[4740]: I0216 13:05:46.828356 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-75b694c59-wkpkw" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.547381 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-frlcd"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.550046 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.551709 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.552289 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gnkqv" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.552329 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.558286 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.558957 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.561028 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.582450 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.635659 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-ffcm2"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.637570 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.640077 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.640599 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.640917 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.641090 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2dwkl" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659198 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-reloader\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659329 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e220608-2271-4260-bc94-e4d206c718d4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659386 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-startup\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659418 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659568 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnrdp\" (UniqueName: \"kubernetes.io/projected/28f2676a-f290-4e9d-9622-d8808c6b8192-kube-api-access-hnrdp\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659625 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-conf\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659709 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics-certs\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659789 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-sockets\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.659853 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7z67\" (UniqueName: 
\"kubernetes.io/projected/2e220608-2271-4260-bc94-e4d206c718d4-kube-api-access-h7z67\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.660419 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-kfv4h"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.661746 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.664460 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.675634 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-kfv4h"] Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761552 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnrdp\" (UniqueName: \"kubernetes.io/projected/28f2676a-f290-4e9d-9622-d8808c6b8192-kube-api-access-hnrdp\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761608 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-conf\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761636 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-metrics-certs\") pod \"speaker-ffcm2\" (UID: 
\"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761666 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2xvq\" (UniqueName: \"kubernetes.io/projected/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-kube-api-access-r2xvq\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761701 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics-certs\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761800 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-cert\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761865 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761894 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-sockets\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 
crc kubenswrapper[4740]: I0216 13:05:47.761929 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7z67\" (UniqueName: \"kubernetes.io/projected/2e220608-2271-4260-bc94-e4d206c718d4-kube-api-access-h7z67\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.761977 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762000 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-reloader\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762023 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/05937f4c-8149-4db8-bb5e-e863ae011d92-metallb-excludel2\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762057 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-conf\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762091 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gsbf\" (UniqueName: \"kubernetes.io/projected/05937f4c-8149-4db8-bb5e-e863ae011d92-kube-api-access-6gsbf\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e220608-2271-4260-bc94-e4d206c718d4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762196 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-sockets\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762213 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-startup\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762240 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-reloader\") pod 
\"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.762493 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.763319 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/28f2676a-f290-4e9d-9622-d8808c6b8192-frr-startup\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.768062 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28f2676a-f290-4e9d-9622-d8808c6b8192-metrics-certs\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.768615 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e220608-2271-4260-bc94-e4d206c718d4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.777761 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7z67\" (UniqueName: \"kubernetes.io/projected/2e220608-2271-4260-bc94-e4d206c718d4-kube-api-access-h7z67\") pod \"frr-k8s-webhook-server-78b44bf5bb-spwnh\" (UID: \"2e220608-2271-4260-bc94-e4d206c718d4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 
13:05:47.781890 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnrdp\" (UniqueName: \"kubernetes.io/projected/28f2676a-f290-4e9d-9622-d8808c6b8192-kube-api-access-hnrdp\") pod \"frr-k8s-frlcd\" (UID: \"28f2676a-f290-4e9d-9622-d8808c6b8192\") " pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863134 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-cert\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863184 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863248 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/05937f4c-8149-4db8-bb5e-e863ae011d92-metallb-excludel2\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863282 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gsbf\" (UniqueName: 
\"kubernetes.io/projected/05937f4c-8149-4db8-bb5e-e863ae011d92-kube-api-access-6gsbf\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863344 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-metrics-certs\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.863367 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2xvq\" (UniqueName: \"kubernetes.io/projected/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-kube-api-access-r2xvq\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: E0216 13:05:47.863401 4740 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 13:05:47 crc kubenswrapper[4740]: E0216 13:05:47.863475 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist podName:05937f4c-8149-4db8-bb5e-e863ae011d92 nodeName:}" failed. No retries permitted until 2026-02-16 13:05:48.363457273 +0000 UTC m=+775.739805994 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist") pod "speaker-ffcm2" (UID: "05937f4c-8149-4db8-bb5e-e863ae011d92") : secret "metallb-memberlist" not found Feb 16 13:05:47 crc kubenswrapper[4740]: E0216 13:05:47.863663 4740 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 16 13:05:47 crc kubenswrapper[4740]: E0216 13:05:47.863692 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs podName:e9790ca2-5f44-4c39-a31f-13dc607ab7c4 nodeName:}" failed. No retries permitted until 2026-02-16 13:05:48.36368445 +0000 UTC m=+775.740033171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs") pod "controller-69bbfbf88f-kfv4h" (UID: "e9790ca2-5f44-4c39-a31f-13dc607ab7c4") : secret "controller-certs-secret" not found Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.864256 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/05937f4c-8149-4db8-bb5e-e863ae011d92-metallb-excludel2\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.866046 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-cert\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.868531 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-metrics-certs\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.874159 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.880302 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gsbf\" (UniqueName: \"kubernetes.io/projected/05937f4c-8149-4db8-bb5e-e863ae011d92-kube-api-access-6gsbf\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.883225 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:47 crc kubenswrapper[4740]: I0216 13:05:47.891148 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2xvq\" (UniqueName: \"kubernetes.io/projected/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-kube-api-access-r2xvq\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.092270 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh"] Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.160027 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"e274bab74a1626c3b63a6777781609180502777996f79290ada0d89c8ee33d0a"} Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.172954 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" 
event={"ID":"2e220608-2271-4260-bc94-e4d206c718d4","Type":"ContainerStarted","Data":"2430562d3e2cc186a63cd1c2eedaff020e1522b1b01e6f2b9b040be74d4bf7ea"} Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.372926 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.372988 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:48 crc kubenswrapper[4740]: E0216 13:05:48.374001 4740 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 13:05:48 crc kubenswrapper[4740]: E0216 13:05:48.374080 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist podName:05937f4c-8149-4db8-bb5e-e863ae011d92 nodeName:}" failed. No retries permitted until 2026-02-16 13:05:49.37406064 +0000 UTC m=+776.750409361 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist") pod "speaker-ffcm2" (UID: "05937f4c-8149-4db8-bb5e-e863ae011d92") : secret "metallb-memberlist" not found Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.379968 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9790ca2-5f44-4c39-a31f-13dc607ab7c4-metrics-certs\") pod \"controller-69bbfbf88f-kfv4h\" (UID: \"e9790ca2-5f44-4c39-a31f-13dc607ab7c4\") " pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.583338 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:48 crc kubenswrapper[4740]: I0216 13:05:48.977101 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-kfv4h"] Feb 16 13:05:48 crc kubenswrapper[4740]: W0216 13:05:48.980651 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9790ca2_5f44_4c39_a31f_13dc607ab7c4.slice/crio-360a211771d715b9576c62ed79f638d32b1e2a23bc3a8a5fee065303e8f59f59 WatchSource:0}: Error finding container 360a211771d715b9576c62ed79f638d32b1e2a23bc3a8a5fee065303e8f59f59: Status 404 returned error can't find the container with id 360a211771d715b9576c62ed79f638d32b1e2a23bc3a8a5fee065303e8f59f59 Feb 16 13:05:49 crc kubenswrapper[4740]: I0216 13:05:49.180770 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-kfv4h" event={"ID":"e9790ca2-5f44-4c39-a31f-13dc607ab7c4","Type":"ContainerStarted","Data":"ea6b634412a715ccba5cfb746d81f8bec8e13d19011d076a5cd37ff0a5afb482"} Feb 16 13:05:49 crc kubenswrapper[4740]: I0216 13:05:49.180841 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-69bbfbf88f-kfv4h" event={"ID":"e9790ca2-5f44-4c39-a31f-13dc607ab7c4","Type":"ContainerStarted","Data":"360a211771d715b9576c62ed79f638d32b1e2a23bc3a8a5fee065303e8f59f59"} Feb 16 13:05:49 crc kubenswrapper[4740]: I0216 13:05:49.385970 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:49 crc kubenswrapper[4740]: I0216 13:05:49.392242 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/05937f4c-8149-4db8-bb5e-e863ae011d92-memberlist\") pod \"speaker-ffcm2\" (UID: \"05937f4c-8149-4db8-bb5e-e863ae011d92\") " pod="metallb-system/speaker-ffcm2" Feb 16 13:05:49 crc kubenswrapper[4740]: I0216 13:05:49.457018 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ffcm2" Feb 16 13:05:49 crc kubenswrapper[4740]: W0216 13:05:49.482594 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05937f4c_8149_4db8_bb5e_e863ae011d92.slice/crio-6c670af40654d0a585b7af1c8c7fa5e016913c57edd7e6907ada0543439c7ecf WatchSource:0}: Error finding container 6c670af40654d0a585b7af1c8c7fa5e016913c57edd7e6907ada0543439c7ecf: Status 404 returned error can't find the container with id 6c670af40654d0a585b7af1c8c7fa5e016913c57edd7e6907ada0543439c7ecf Feb 16 13:05:50 crc kubenswrapper[4740]: I0216 13:05:50.450628 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ffcm2" event={"ID":"05937f4c-8149-4db8-bb5e-e863ae011d92","Type":"ContainerStarted","Data":"57d3f5806bf8034d7bb2886f40e34b78cb29cea3543d9e2ae8c8eee8cf2fc742"} Feb 16 13:05:50 crc kubenswrapper[4740]: I0216 13:05:50.450681 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ffcm2" event={"ID":"05937f4c-8149-4db8-bb5e-e863ae011d92","Type":"ContainerStarted","Data":"6c670af40654d0a585b7af1c8c7fa5e016913c57edd7e6907ada0543439c7ecf"} Feb 16 13:05:50 crc kubenswrapper[4740]: I0216 13:05:50.456076 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-kfv4h" event={"ID":"e9790ca2-5f44-4c39-a31f-13dc607ab7c4","Type":"ContainerStarted","Data":"9a3105b3cef26fe7e89a716049a5f4b8d0ba5a1e1753252511fa2695f07b1eb3"} Feb 16 13:05:50 crc kubenswrapper[4740]: I0216 13:05:50.456231 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-kfv4h" Feb 16 13:05:50 crc kubenswrapper[4740]: I0216 13:05:50.479476 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-kfv4h" podStartSLOduration=3.479455648 podStartE2EDuration="3.479455648s" podCreationTimestamp="2026-02-16 
13:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:05:50.473563067 +0000 UTC m=+777.849911788" watchObservedRunningTime="2026-02-16 13:05:50.479455648 +0000 UTC m=+777.855804379" Feb 16 13:05:51 crc kubenswrapper[4740]: I0216 13:05:51.471588 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ffcm2" event={"ID":"05937f4c-8149-4db8-bb5e-e863ae011d92","Type":"ContainerStarted","Data":"2b8d5bfb2f4a6f5b9af31db37b4cd82053b2f220c4d5059d86768f201f1836ec"} Feb 16 13:05:51 crc kubenswrapper[4740]: I0216 13:05:51.471943 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ffcm2" Feb 16 13:05:51 crc kubenswrapper[4740]: I0216 13:05:51.493433 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-ffcm2" podStartSLOduration=4.49341574 podStartE2EDuration="4.49341574s" podCreationTimestamp="2026-02-16 13:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:05:51.492261633 +0000 UTC m=+778.868610354" watchObservedRunningTime="2026-02-16 13:05:51.49341574 +0000 UTC m=+778.869764451" Feb 16 13:05:56 crc kubenswrapper[4740]: I0216 13:05:56.499140 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" event={"ID":"2e220608-2271-4260-bc94-e4d206c718d4","Type":"ContainerStarted","Data":"f37d8f01be7f186d0cfd1ec356f9faa69031e4d1c36d5eda02dc2c3176c47bf1"} Feb 16 13:05:56 crc kubenswrapper[4740]: I0216 13:05:56.500319 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" Feb 16 13:05:56 crc kubenswrapper[4740]: I0216 13:05:56.502256 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="28f2676a-f290-4e9d-9622-d8808c6b8192" containerID="f6b7f2324310e20c93c7efce5dad368774d8c3e1361cdf1d57c1e9b44abf9204" exitCode=0 Feb 16 13:05:56 crc kubenswrapper[4740]: I0216 13:05:56.502293 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerDied","Data":"f6b7f2324310e20c93c7efce5dad368774d8c3e1361cdf1d57c1e9b44abf9204"} Feb 16 13:05:56 crc kubenswrapper[4740]: I0216 13:05:56.521596 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh" podStartSLOduration=1.8519845080000001 podStartE2EDuration="9.521574785s" podCreationTimestamp="2026-02-16 13:05:47 +0000 UTC" firstStartedPulling="2026-02-16 13:05:48.09738157 +0000 UTC m=+775.473730291" lastFinishedPulling="2026-02-16 13:05:55.766971847 +0000 UTC m=+783.143320568" observedRunningTime="2026-02-16 13:05:56.515142186 +0000 UTC m=+783.891490907" watchObservedRunningTime="2026-02-16 13:05:56.521574785 +0000 UTC m=+783.897923506" Feb 16 13:05:57 crc kubenswrapper[4740]: I0216 13:05:57.511444 4740 generic.go:334] "Generic (PLEG): container finished" podID="28f2676a-f290-4e9d-9622-d8808c6b8192" containerID="7ab45c16a678070f6b9d23152f7761310eb42fda5b420d8113d3b14a1fe146f7" exitCode=0 Feb 16 13:05:57 crc kubenswrapper[4740]: I0216 13:05:57.511536 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerDied","Data":"7ab45c16a678070f6b9d23152f7761310eb42fda5b420d8113d3b14a1fe146f7"} Feb 16 13:05:58 crc kubenswrapper[4740]: I0216 13:05:58.518685 4740 generic.go:334] "Generic (PLEG): container finished" podID="28f2676a-f290-4e9d-9622-d8808c6b8192" containerID="7470b973696ac614c3457ef3f8faa678bf8c43243a0bcc76f86edc9a4bb7266e" exitCode=0 Feb 16 13:05:58 crc kubenswrapper[4740]: I0216 13:05:58.518768 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerDied","Data":"7470b973696ac614c3457ef3f8faa678bf8c43243a0bcc76f86edc9a4bb7266e"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.462327 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-ffcm2" Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527645 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"737bb3575eb39630038ffab597d1ac234e9d4edd2bbb3436450651d4121200b2"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527681 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"7f45de26383b0bc8823ec1b95e9d0597f5341d353628a7f3e8ef5c33698e9c30"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527691 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"cad00b7671692e43e7c071ff9c7f0f0c1d003fb5da10f1db712ca5c80d766660"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527699 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"b8d1dbfd6b3ab2bef22a57a710d68438ffd7197884828900c1ea356d3a83d4c4"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527706 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"6c1f2a78b7fe8008e61aca528fe3ef3de733507f7757166e186ad7c611d24cd6"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527714 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-frlcd" event={"ID":"28f2676a-f290-4e9d-9622-d8808c6b8192","Type":"ContainerStarted","Data":"8c83655ef2e72ce020be45b37b01e16ff04c8efcee2a2ad7426ab57ff39b093c"} Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.527830 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-frlcd" Feb 16 13:05:59 crc kubenswrapper[4740]: I0216 13:05:59.560398 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-frlcd" podStartSLOduration=4.852765442 podStartE2EDuration="12.560372523s" podCreationTimestamp="2026-02-16 13:05:47 +0000 UTC" firstStartedPulling="2026-02-16 13:05:48.03422845 +0000 UTC m=+775.410577171" lastFinishedPulling="2026-02-16 13:05:55.741835531 +0000 UTC m=+783.118184252" observedRunningTime="2026-02-16 13:05:59.548559829 +0000 UTC m=+786.924908560" watchObservedRunningTime="2026-02-16 13:05:59.560372523 +0000 UTC m=+786.936721284" Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.035784 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"] Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.037080 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vgwdx"
Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.041322 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.041379 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.042681 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gbz6w"
Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.053188 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"]
Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.094086 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wdm\" (UniqueName: \"kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm\") pod \"openstack-operator-index-vgwdx\" (UID: \"1d07676e-d3a5-489e-bd5a-61d7a59a039b\") " pod="openstack-operators/openstack-operator-index-vgwdx"
Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.196173 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wdm\" (UniqueName: \"kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm\") pod \"openstack-operator-index-vgwdx\" (UID: \"1d07676e-d3a5-489e-bd5a-61d7a59a039b\") " pod="openstack-operators/openstack-operator-index-vgwdx"
Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.227121 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wdm\" (UniqueName: \"kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm\") pod \"openstack-operator-index-vgwdx\" (UID: \"1d07676e-d3a5-489e-bd5a-61d7a59a039b\") " pod="openstack-operators/openstack-operator-index-vgwdx"
Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.367258 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vgwdx"
Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.768264 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"]
Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.875407 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-frlcd"
Feb 16 13:06:02 crc kubenswrapper[4740]: I0216 13:06:02.915411 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-frlcd"
Feb 16 13:06:03 crc kubenswrapper[4740]: I0216 13:06:03.556299 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgwdx" event={"ID":"1d07676e-d3a5-489e-bd5a-61d7a59a039b","Type":"ContainerStarted","Data":"d61ae1b6020950c900af7ee3d49ade2d0b0882e3bedf8ab6d6113b5c5284ba1c"}
Feb 16 13:06:05 crc kubenswrapper[4740]: I0216 13:06:05.419884 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"]
Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.026205 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qzt4t"]
Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.027419 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qzt4t"
Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.035937 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qzt4t"]
Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.047399 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxzlr\" (UniqueName: \"kubernetes.io/projected/7fe65e33-ae2e-4f40-b686-454192d6b538-kube-api-access-wxzlr\") pod \"openstack-operator-index-qzt4t\" (UID: \"7fe65e33-ae2e-4f40-b686-454192d6b538\") " pod="openstack-operators/openstack-operator-index-qzt4t"
Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.149141 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxzlr\" (UniqueName: \"kubernetes.io/projected/7fe65e33-ae2e-4f40-b686-454192d6b538-kube-api-access-wxzlr\") pod \"openstack-operator-index-qzt4t\" (UID: \"7fe65e33-ae2e-4f40-b686-454192d6b538\") " pod="openstack-operators/openstack-operator-index-qzt4t"
Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.170575 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxzlr\" (UniqueName: \"kubernetes.io/projected/7fe65e33-ae2e-4f40-b686-454192d6b538-kube-api-access-wxzlr\") pod \"openstack-operator-index-qzt4t\" (UID: \"7fe65e33-ae2e-4f40-b686-454192d6b538\") " pod="openstack-operators/openstack-operator-index-qzt4t"
Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.347898 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qzt4t"
Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.572518 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgwdx" event={"ID":"1d07676e-d3a5-489e-bd5a-61d7a59a039b","Type":"ContainerStarted","Data":"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46"}
Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.572648 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vgwdx" podUID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" containerName="registry-server" containerID="cri-o://901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46" gracePeriod=2
Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.595658 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vgwdx" podStartSLOduration=1.8437921099999999 podStartE2EDuration="4.595638264s" podCreationTimestamp="2026-02-16 13:06:02 +0000 UTC" firstStartedPulling="2026-02-16 13:06:02.778063682 +0000 UTC m=+790.154412403" lastFinishedPulling="2026-02-16 13:06:05.529909826 +0000 UTC m=+792.906258557" observedRunningTime="2026-02-16 13:06:06.588838682 +0000 UTC m=+793.965187403" watchObservedRunningTime="2026-02-16 13:06:06.595638264 +0000 UTC m=+793.971986985"
Feb 16 13:06:06 crc kubenswrapper[4740]: I0216 13:06:06.766652 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qzt4t"]
Feb 16 13:06:06 crc kubenswrapper[4740]: W0216 13:06:06.773262 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fe65e33_ae2e_4f40_b686_454192d6b538.slice/crio-7e65c4ea6e88c545a666e88e2cda9dca512d3162877ddadcf16c7e8d4025f6fa WatchSource:0}: Error finding container 7e65c4ea6e88c545a666e88e2cda9dca512d3162877ddadcf16c7e8d4025f6fa: Status 404 returned error can't find the container with id 7e65c4ea6e88c545a666e88e2cda9dca512d3162877ddadcf16c7e8d4025f6fa
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.041003 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vgwdx"
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.072332 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2wdm\" (UniqueName: \"kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm\") pod \"1d07676e-d3a5-489e-bd5a-61d7a59a039b\" (UID: \"1d07676e-d3a5-489e-bd5a-61d7a59a039b\") "
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.077667 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm" (OuterVolumeSpecName: "kube-api-access-j2wdm") pod "1d07676e-d3a5-489e-bd5a-61d7a59a039b" (UID: "1d07676e-d3a5-489e-bd5a-61d7a59a039b"). InnerVolumeSpecName "kube-api-access-j2wdm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.173556 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2wdm\" (UniqueName: \"kubernetes.io/projected/1d07676e-d3a5-489e-bd5a-61d7a59a039b-kube-api-access-j2wdm\") on node \"crc\" DevicePath \"\""
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.580764 4740 generic.go:334] "Generic (PLEG): container finished" podID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" containerID="901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46" exitCode=0
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.580892 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vgwdx"
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.580896 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgwdx" event={"ID":"1d07676e-d3a5-489e-bd5a-61d7a59a039b","Type":"ContainerDied","Data":"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46"}
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.580971 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgwdx" event={"ID":"1d07676e-d3a5-489e-bd5a-61d7a59a039b","Type":"ContainerDied","Data":"d61ae1b6020950c900af7ee3d49ade2d0b0882e3bedf8ab6d6113b5c5284ba1c"}
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.580991 4740 scope.go:117] "RemoveContainer" containerID="901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46"
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.584798 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qzt4t" event={"ID":"7fe65e33-ae2e-4f40-b686-454192d6b538","Type":"ContainerStarted","Data":"ad823f48706259f0dff98372d32929b637cffd816f125563634c2ecc922fec61"}
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.584843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qzt4t" event={"ID":"7fe65e33-ae2e-4f40-b686-454192d6b538","Type":"ContainerStarted","Data":"7e65c4ea6e88c545a666e88e2cda9dca512d3162877ddadcf16c7e8d4025f6fa"}
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.600786 4740 scope.go:117] "RemoveContainer" containerID="901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46"
Feb 16 13:06:07 crc kubenswrapper[4740]: E0216 13:06:07.601782 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46\": container with ID starting with 901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46 not found: ID does not exist" containerID="901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46"
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.601906 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46"} err="failed to get container status \"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46\": rpc error: code = NotFound desc = could not find container \"901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46\": container with ID starting with 901678c4dc00d4a7f7a797dc25617ecdcbe877b51b455b53fab65803890beb46 not found: ID does not exist"
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.603972 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qzt4t" podStartSLOduration=1.359154813 podStartE2EDuration="1.603954647s" podCreationTimestamp="2026-02-16 13:06:06 +0000 UTC" firstStartedPulling="2026-02-16 13:06:06.777271625 +0000 UTC m=+794.153620346" lastFinishedPulling="2026-02-16 13:06:07.022071459 +0000 UTC m=+794.398420180" observedRunningTime="2026-02-16 13:06:07.602155377 +0000 UTC m=+794.978504098" watchObservedRunningTime="2026-02-16 13:06:07.603954647 +0000 UTC m=+794.980303368"
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.626526 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"]
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.631347 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vgwdx"]
Feb 16 13:06:07 crc kubenswrapper[4740]: I0216 13:06:07.886611 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-spwnh"
Feb 16 13:06:08 crc kubenswrapper[4740]: I0216 13:06:08.588480 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-kfv4h"
Feb 16 13:06:09 crc kubenswrapper[4740]: I0216 13:06:09.289558 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" path="/var/lib/kubelet/pods/1d07676e-d3a5-489e-bd5a-61d7a59a039b/volumes"
Feb 16 13:06:16 crc kubenswrapper[4740]: I0216 13:06:16.349175 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qzt4t"
Feb 16 13:06:16 crc kubenswrapper[4740]: I0216 13:06:16.349887 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qzt4t"
Feb 16 13:06:16 crc kubenswrapper[4740]: I0216 13:06:16.391449 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qzt4t"
Feb 16 13:06:16 crc kubenswrapper[4740]: I0216 13:06:16.664189 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qzt4t"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.657288 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"]
Feb 16 13:06:17 crc kubenswrapper[4740]: E0216 13:06:17.657847 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" containerName="registry-server"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.657864 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" containerName="registry-server"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.657999 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d07676e-d3a5-489e-bd5a-61d7a59a039b" containerName="registry-server"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.658978 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.661453 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-gw6jq"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.672976 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"]
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.808441 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.808506 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tztj\" (UniqueName: \"kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.808604 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.878426 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-frlcd"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.909180 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.909581 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.909660 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tztj\" (UniqueName: \"kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.909890 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.909668 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.935120 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tztj\" (UniqueName: \"kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj\") pod \"839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") " pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:17 crc kubenswrapper[4740]: I0216 13:06:17.979489 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:18 crc kubenswrapper[4740]: I0216 13:06:18.373575 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"]
Feb 16 13:06:18 crc kubenswrapper[4740]: W0216 13:06:18.377234 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7597307b_d3fd_4fa0_b370_a6d08b6a2daa.slice/crio-10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c WatchSource:0}: Error finding container 10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c: Status 404 returned error can't find the container with id 10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c
Feb 16 13:06:18 crc kubenswrapper[4740]: I0216 13:06:18.653181 4740 generic.go:334] "Generic (PLEG): container finished" podID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerID="45e4bb3a93c683eb126bdaae78036721713d442e89c860779044cb36b24641cb" exitCode=0
Feb 16 13:06:18 crc kubenswrapper[4740]: I0216 13:06:18.653385 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" event={"ID":"7597307b-d3fd-4fa0-b370-a6d08b6a2daa","Type":"ContainerDied","Data":"45e4bb3a93c683eb126bdaae78036721713d442e89c860779044cb36b24641cb"}
Feb 16 13:06:18 crc kubenswrapper[4740]: I0216 13:06:18.653468 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" event={"ID":"7597307b-d3fd-4fa0-b370-a6d08b6a2daa","Type":"ContainerStarted","Data":"10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c"}
Feb 16 13:06:19 crc kubenswrapper[4740]: I0216 13:06:19.663285 4740 generic.go:334] "Generic (PLEG): container finished" podID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerID="9ee4c1a37103c1f1516301a53d50369a420119a42878e1e0f2c39067b42ff149" exitCode=0
Feb 16 13:06:19 crc kubenswrapper[4740]: I0216 13:06:19.663331 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" event={"ID":"7597307b-d3fd-4fa0-b370-a6d08b6a2daa","Type":"ContainerDied","Data":"9ee4c1a37103c1f1516301a53d50369a420119a42878e1e0f2c39067b42ff149"}
Feb 16 13:06:20 crc kubenswrapper[4740]: I0216 13:06:20.672367 4740 generic.go:334] "Generic (PLEG): container finished" podID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerID="54f863a7f4725951e5ec2bf36ca009e95e79cf583a3f461e06dec8ce0e641e1d" exitCode=0
Feb 16 13:06:20 crc kubenswrapper[4740]: I0216 13:06:20.672580 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" event={"ID":"7597307b-d3fd-4fa0-b370-a6d08b6a2daa","Type":"ContainerDied","Data":"54f863a7f4725951e5ec2bf36ca009e95e79cf583a3f461e06dec8ce0e641e1d"}
Feb 16 13:06:21 crc kubenswrapper[4740]: I0216 13:06:21.951118 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.065919 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle\") pod \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") "
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.066001 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tztj\" (UniqueName: \"kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj\") pod \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") "
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.066072 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util\") pod \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\" (UID: \"7597307b-d3fd-4fa0-b370-a6d08b6a2daa\") "
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.066697 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle" (OuterVolumeSpecName: "bundle") pod "7597307b-d3fd-4fa0-b370-a6d08b6a2daa" (UID: "7597307b-d3fd-4fa0-b370-a6d08b6a2daa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.072119 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj" (OuterVolumeSpecName: "kube-api-access-9tztj") pod "7597307b-d3fd-4fa0-b370-a6d08b6a2daa" (UID: "7597307b-d3fd-4fa0-b370-a6d08b6a2daa"). InnerVolumeSpecName "kube-api-access-9tztj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.079442 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util" (OuterVolumeSpecName: "util") pod "7597307b-d3fd-4fa0-b370-a6d08b6a2daa" (UID: "7597307b-d3fd-4fa0-b370-a6d08b6a2daa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.167211 4740 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.167241 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tztj\" (UniqueName: \"kubernetes.io/projected/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-kube-api-access-9tztj\") on node \"crc\" DevicePath \"\""
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.167251 4740 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7597307b-d3fd-4fa0-b370-a6d08b6a2daa-util\") on node \"crc\" DevicePath \"\""
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.698398 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547" event={"ID":"7597307b-d3fd-4fa0-b370-a6d08b6a2daa","Type":"ContainerDied","Data":"10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c"}
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.698771 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10739ef70be45f92b8ac9de30cbedcadea2694dd248171f76593607e8b42076c"
Feb 16 13:06:22 crc kubenswrapper[4740]: I0216 13:06:22.698456 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547"
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.602708 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"]
Feb 16 13:06:24 crc kubenswrapper[4740]: E0216 13:06:24.603244 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="util"
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.603256 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="util"
Feb 16 13:06:24 crc kubenswrapper[4740]: E0216 13:06:24.603264 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="extract"
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.603271 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="extract"
Feb 16 13:06:24 crc kubenswrapper[4740]: E0216 13:06:24.603281 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="pull"
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.603287 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="pull"
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.603398 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7597307b-d3fd-4fa0-b370-a6d08b6a2daa" containerName="extract"
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.603784 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.606381 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-5h76t"
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.622915 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"]
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.699601 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bldhj\" (UniqueName: \"kubernetes.io/projected/4c82699a-266c-43ce-acce-32c8aea26c10-kube-api-access-bldhj\") pod \"openstack-operator-controller-init-7f746469c7-kzds7\" (UID: \"4c82699a-266c-43ce-acce-32c8aea26c10\") " pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.801037 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bldhj\" (UniqueName: \"kubernetes.io/projected/4c82699a-266c-43ce-acce-32c8aea26c10-kube-api-access-bldhj\") pod \"openstack-operator-controller-init-7f746469c7-kzds7\" (UID: \"4c82699a-266c-43ce-acce-32c8aea26c10\") " pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.820749 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bldhj\" (UniqueName: \"kubernetes.io/projected/4c82699a-266c-43ce-acce-32c8aea26c10-kube-api-access-bldhj\") pod \"openstack-operator-controller-init-7f746469c7-kzds7\" (UID: \"4c82699a-266c-43ce-acce-32c8aea26c10\") " pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"
Feb 16 13:06:24 crc kubenswrapper[4740]: I0216 13:06:24.923136 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"
Feb 16 13:06:25 crc kubenswrapper[4740]: I0216 13:06:25.349870 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"]
Feb 16 13:06:25 crc kubenswrapper[4740]: I0216 13:06:25.718547 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" event={"ID":"4c82699a-266c-43ce-acce-32c8aea26c10","Type":"ContainerStarted","Data":"2d53d6a466f21336466b4ea9dc38f352cc28f4453557bd8464569927a273c38d"}
Feb 16 13:06:29 crc kubenswrapper[4740]: I0216 13:06:29.742434 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" event={"ID":"4c82699a-266c-43ce-acce-32c8aea26c10","Type":"ContainerStarted","Data":"170c72ca7f2a2582ad4ff2b7e6a60aa43bc691572b294dcb16c51e81b61ce668"}
Feb 16 13:06:29 crc kubenswrapper[4740]: I0216 13:06:29.743031 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"
Feb 16 13:06:29 crc kubenswrapper[4740]: I0216 13:06:29.773123 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7" podStartSLOduration=2.148378333 podStartE2EDuration="5.773100018s" podCreationTimestamp="2026-02-16 13:06:24 +0000 UTC" firstStartedPulling="2026-02-16 13:06:25.369498626 +0000 UTC m=+812.745847347" lastFinishedPulling="2026-02-16 13:06:28.994220271 +0000 UTC m=+816.370569032" observedRunningTime="2026-02-16 13:06:29.767381532 +0000 UTC m=+817.143730253" watchObservedRunningTime="2026-02-16 13:06:29.773100018 +0000 UTC m=+817.149448739"
Feb 16 13:06:34 crc kubenswrapper[4740]: I0216 13:06:34.925797 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7f746469c7-kzds7"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.193456 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.195379 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.199655 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.200698 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zvdwm"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.200850 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.202481 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-f86fp"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.219665 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.233931 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.260112 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.262422 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.264885 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lkmts"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.276880 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.284041 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.284845 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.286805 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-npqnf"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.303727 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.320764 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.321790 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.324773 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-ff9b6"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.332725 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87w7z\" (UniqueName: \"kubernetes.io/projected/d6090007-0c13-4ea2-823c-3d95bb336fd8-kube-api-access-87w7z\") pod \"barbican-operator-controller-manager-868647ff47-jsfjx\" (UID: \"d6090007-0c13-4ea2-823c-3d95bb336fd8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.332850 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5vw\" (UniqueName: \"kubernetes.io/projected/f0032304-8799-4a85-964f-2017bfd2dbc8-kube-api-access-md5vw\") pod \"cinder-operator-controller-manager-5d946d989d-rpbmb\" (UID: \"f0032304-8799-4a85-964f-2017bfd2dbc8\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.343898 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.345112 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.353290 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-x9s8f"
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.359491 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.375165 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.384845 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz"]
Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.385840 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.390508 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nczlp" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.398709 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.403277 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.403403 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.406328 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-z7z7h" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.410079 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.429928 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.430796 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.434475 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plcc8\" (UniqueName: \"kubernetes.io/projected/7f22cc6e-3761-4336-ab1d-74d9fd88432c-kube-api-access-plcc8\") pod \"heat-operator-controller-manager-69f49c598c-kk4mh\" (UID: \"7f22cc6e-3761-4336-ab1d-74d9fd88432c\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.434527 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87w7z\" (UniqueName: \"kubernetes.io/projected/d6090007-0c13-4ea2-823c-3d95bb336fd8-kube-api-access-87w7z\") pod \"barbican-operator-controller-manager-868647ff47-jsfjx\" (UID: \"d6090007-0c13-4ea2-823c-3d95bb336fd8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.434591 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54fn5\" (UniqueName: \"kubernetes.io/projected/90321508-9bb9-458e-ada0-001c779161c1-kube-api-access-54fn5\") pod \"glance-operator-controller-manager-77987464f4-9xbzr\" (UID: \"90321508-9bb9-458e-ada0-001c779161c1\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.434652 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkgr6\" (UniqueName: \"kubernetes.io/projected/069bdc0e-d9e1-4e93-a6fc-8aa439550dd0-kube-api-access-gkgr6\") pod \"designate-operator-controller-manager-6d8bf5c495-9kqqk\" (UID: \"069bdc0e-d9e1-4e93-a6fc-8aa439550dd0\") " 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.434673 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5vw\" (UniqueName: \"kubernetes.io/projected/f0032304-8799-4a85-964f-2017bfd2dbc8-kube-api-access-md5vw\") pod \"cinder-operator-controller-manager-5d946d989d-rpbmb\" (UID: \"f0032304-8799-4a85-964f-2017bfd2dbc8\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.435297 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-dvkmf" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.448033 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.449166 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.456680 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.461463 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-fr6sk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.461941 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.471303 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.474109 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md5vw\" (UniqueName: \"kubernetes.io/projected/f0032304-8799-4a85-964f-2017bfd2dbc8-kube-api-access-md5vw\") pod \"cinder-operator-controller-manager-5d946d989d-rpbmb\" (UID: \"f0032304-8799-4a85-964f-2017bfd2dbc8\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.478401 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87w7z\" (UniqueName: \"kubernetes.io/projected/d6090007-0c13-4ea2-823c-3d95bb336fd8-kube-api-access-87w7z\") pod \"barbican-operator-controller-manager-868647ff47-jsfjx\" (UID: \"d6090007-0c13-4ea2-823c-3d95bb336fd8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.495594 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.496664 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.499091 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-2k29d" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.515511 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.520164 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.521039 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.521997 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.530690 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7rqlb" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.533106 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.534340 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.535979 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536044 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krb6v\" (UniqueName: \"kubernetes.io/projected/3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17-kube-api-access-krb6v\") pod \"ironic-operator-controller-manager-554564d7fc-v28lz\" (UID: \"3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536082 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkgr6\" (UniqueName: \"kubernetes.io/projected/069bdc0e-d9e1-4e93-a6fc-8aa439550dd0-kube-api-access-gkgr6\") pod \"designate-operator-controller-manager-6d8bf5c495-9kqqk\" (UID: \"069bdc0e-d9e1-4e93-a6fc-8aa439550dd0\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536118 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnpqb\" (UniqueName: \"kubernetes.io/projected/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-kube-api-access-nnpqb\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc 
kubenswrapper[4740]: I0216 13:06:54.536151 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlpn6\" (UniqueName: \"kubernetes.io/projected/fdf72675-c282-4f45-ad93-19aa643dcff8-kube-api-access-wlpn6\") pod \"horizon-operator-controller-manager-5b9b8895d5-nl26x\" (UID: \"fdf72675-c282-4f45-ad93-19aa643dcff8\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536188 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plcc8\" (UniqueName: \"kubernetes.io/projected/7f22cc6e-3761-4336-ab1d-74d9fd88432c-kube-api-access-plcc8\") pod \"heat-operator-controller-manager-69f49c598c-kk4mh\" (UID: \"7f22cc6e-3761-4336-ab1d-74d9fd88432c\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536229 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk6rh\" (UniqueName: \"kubernetes.io/projected/7f932811-4449-440a-b4c7-4817bfb33dd3-kube-api-access-rk6rh\") pod \"manila-operator-controller-manager-54f6768c69-44wdn\" (UID: \"7f932811-4449-440a-b4c7-4817bfb33dd3\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.536281 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54fn5\" (UniqueName: \"kubernetes.io/projected/90321508-9bb9-458e-ada0-001c779161c1-kube-api-access-54fn5\") pod \"glance-operator-controller-manager-77987464f4-9xbzr\" (UID: \"90321508-9bb9-458e-ada0-001c779161c1\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.537038 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.538342 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.544401 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-vbtbk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.559310 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.566974 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54fn5\" (UniqueName: \"kubernetes.io/projected/90321508-9bb9-458e-ada0-001c779161c1-kube-api-access-54fn5\") pod \"glance-operator-controller-manager-77987464f4-9xbzr\" (UID: \"90321508-9bb9-458e-ada0-001c779161c1\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.568798 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plcc8\" (UniqueName: \"kubernetes.io/projected/7f22cc6e-3761-4336-ab1d-74d9fd88432c-kube-api-access-plcc8\") pod \"heat-operator-controller-manager-69f49c598c-kk4mh\" (UID: \"7f22cc6e-3761-4336-ab1d-74d9fd88432c\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.569010 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkgr6\" (UniqueName: \"kubernetes.io/projected/069bdc0e-d9e1-4e93-a6fc-8aa439550dd0-kube-api-access-gkgr6\") pod \"designate-operator-controller-manager-6d8bf5c495-9kqqk\" (UID: \"069bdc0e-d9e1-4e93-a6fc-8aa439550dd0\") " 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.585278 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.591793 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.592997 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.595277 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-nk8vd" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.604340 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.634198 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.635749 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.637060 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.637658 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638333 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638381 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlj89\" (UniqueName: \"kubernetes.io/projected/121ee83b-e7f1-4302-9455-4cc6f53a07a5-kube-api-access-qlj89\") pod \"neutron-operator-controller-manager-64ddbf8bb-7t92r\" (UID: \"121ee83b-e7f1-4302-9455-4cc6f53a07a5\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638418 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krb6v\" (UniqueName: \"kubernetes.io/projected/3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17-kube-api-access-krb6v\") pod \"ironic-operator-controller-manager-554564d7fc-v28lz\" (UID: \"3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638453 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmx59\" (UniqueName: \"kubernetes.io/projected/a49c1d67-8cf7-4429-ac73-da13d129304d-kube-api-access-pmx59\") pod 
\"mariadb-operator-controller-manager-6994f66f48-7gw4t\" (UID: \"a49c1d67-8cf7-4429-ac73-da13d129304d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnpqb\" (UniqueName: \"kubernetes.io/projected/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-kube-api-access-nnpqb\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638513 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlpn6\" (UniqueName: \"kubernetes.io/projected/fdf72675-c282-4f45-ad93-19aa643dcff8-kube-api-access-wlpn6\") pod \"horizon-operator-controller-manager-5b9b8895d5-nl26x\" (UID: \"fdf72675-c282-4f45-ad93-19aa643dcff8\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638544 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjjht\" (UniqueName: \"kubernetes.io/projected/fce48c02-3aa2-404b-a9a4-7ba789835be0-kube-api-access-xjjht\") pod \"keystone-operator-controller-manager-b4d948c87-z2m7j\" (UID: \"fce48c02-3aa2-404b-a9a4-7ba789835be0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638575 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlkp\" (UniqueName: \"kubernetes.io/projected/ba6767b2-e03c-4c12-880d-90bd809d9b48-kube-api-access-rrlkp\") pod \"nova-operator-controller-manager-567668f5cf-fn4g2\" (UID: \"ba6767b2-e03c-4c12-880d-90bd809d9b48\") " 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.638621 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk6rh\" (UniqueName: \"kubernetes.io/projected/7f932811-4449-440a-b4c7-4817bfb33dd3-kube-api-access-rk6rh\") pod \"manila-operator-controller-manager-54f6768c69-44wdn\" (UID: \"7f932811-4449-440a-b4c7-4817bfb33dd3\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:06:54 crc kubenswrapper[4740]: E0216 13:06:54.638893 4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:54 crc kubenswrapper[4740]: E0216 13:06:54.638944 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:55.138923702 +0000 UTC m=+842.515272423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.639740 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9xj25" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.651531 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.653366 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.655204 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-l8m2m" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.660217 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.671991 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnpqb\" (UniqueName: \"kubernetes.io/projected/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-kube-api-access-nnpqb\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.673986 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krb6v\" (UniqueName: \"kubernetes.io/projected/3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17-kube-api-access-krb6v\") pod \"ironic-operator-controller-manager-554564d7fc-v28lz\" (UID: \"3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.680837 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlpn6\" (UniqueName: \"kubernetes.io/projected/fdf72675-c282-4f45-ad93-19aa643dcff8-kube-api-access-wlpn6\") pod \"horizon-operator-controller-manager-5b9b8895d5-nl26x\" (UID: \"fdf72675-c282-4f45-ad93-19aa643dcff8\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.684910 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-rk6rh\" (UniqueName: \"kubernetes.io/projected/7f932811-4449-440a-b4c7-4817bfb33dd3-kube-api-access-rk6rh\") pod \"manila-operator-controller-manager-54f6768c69-44wdn\" (UID: \"7f932811-4449-440a-b4c7-4817bfb33dd3\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.687431 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.703625 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.740646 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjjht\" (UniqueName: \"kubernetes.io/projected/fce48c02-3aa2-404b-a9a4-7ba789835be0-kube-api-access-xjjht\") pod \"keystone-operator-controller-manager-b4d948c87-z2m7j\" (UID: \"fce48c02-3aa2-404b-a9a4-7ba789835be0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.740703 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrlkp\" (UniqueName: \"kubernetes.io/projected/ba6767b2-e03c-4c12-880d-90bd809d9b48-kube-api-access-rrlkp\") pod \"nova-operator-controller-manager-567668f5cf-fn4g2\" (UID: \"ba6767b2-e03c-4c12-880d-90bd809d9b48\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.740773 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqnw8\" (UniqueName: \"kubernetes.io/projected/00e4da3c-6d3d-459a-86c2-01a4cdb81e51-kube-api-access-vqnw8\") pod 
\"octavia-operator-controller-manager-69f8888797-pbpdw\" (UID: \"00e4da3c-6d3d-459a-86c2-01a4cdb81e51\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.741192 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.741262 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlj89\" (UniqueName: \"kubernetes.io/projected/121ee83b-e7f1-4302-9455-4cc6f53a07a5-kube-api-access-qlj89\") pod \"neutron-operator-controller-manager-64ddbf8bb-7t92r\" (UID: \"121ee83b-e7f1-4302-9455-4cc6f53a07a5\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.741626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmx59\" (UniqueName: \"kubernetes.io/projected/a49c1d67-8cf7-4429-ac73-da13d129304d-kube-api-access-pmx59\") pod \"mariadb-operator-controller-manager-6994f66f48-7gw4t\" (UID: \"a49c1d67-8cf7-4429-ac73-da13d129304d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.741676 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rg6p\" (UniqueName: \"kubernetes.io/projected/76134787-0eff-47bd-982e-16c2c4f98f19-kube-api-access-5rg6p\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.760465 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.769705 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmx59\" (UniqueName: \"kubernetes.io/projected/a49c1d67-8cf7-4429-ac73-da13d129304d-kube-api-access-pmx59\") pod \"mariadb-operator-controller-manager-6994f66f48-7gw4t\" (UID: \"a49c1d67-8cf7-4429-ac73-da13d129304d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.771841 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlj89\" (UniqueName: \"kubernetes.io/projected/121ee83b-e7f1-4302-9455-4cc6f53a07a5-kube-api-access-qlj89\") pod \"neutron-operator-controller-manager-64ddbf8bb-7t92r\" (UID: \"121ee83b-e7f1-4302-9455-4cc6f53a07a5\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.778209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjjht\" (UniqueName: \"kubernetes.io/projected/fce48c02-3aa2-404b-a9a4-7ba789835be0-kube-api-access-xjjht\") pod \"keystone-operator-controller-manager-b4d948c87-z2m7j\" (UID: \"fce48c02-3aa2-404b-a9a4-7ba789835be0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.779068 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrlkp\" (UniqueName: \"kubernetes.io/projected/ba6767b2-e03c-4c12-880d-90bd809d9b48-kube-api-access-rrlkp\") pod \"nova-operator-controller-manager-567668f5cf-fn4g2\" (UID: 
\"ba6767b2-e03c-4c12-880d-90bd809d9b48\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.796603 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.808415 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.823529 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-f9994" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.825774 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.845231 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rg6p\" (UniqueName: \"kubernetes.io/projected/76134787-0eff-47bd-982e-16c2c4f98f19-kube-api-access-5rg6p\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.845278 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ks2f\" (UniqueName: \"kubernetes.io/projected/6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4-kube-api-access-6ks2f\") pod \"ovn-operator-controller-manager-d44cf6b75-gclp4\" (UID: \"6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.845328 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqnw8\" (UniqueName: \"kubernetes.io/projected/00e4da3c-6d3d-459a-86c2-01a4cdb81e51-kube-api-access-vqnw8\") pod \"octavia-operator-controller-manager-69f8888797-pbpdw\" (UID: \"00e4da3c-6d3d-459a-86c2-01a4cdb81e51\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.845359 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: E0216 13:06:54.845503 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:54 crc kubenswrapper[4740]: E0216 13:06:54.845545 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:55.345532875 +0000 UTC m=+842.721881586 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.858688 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.870598 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqnw8\" (UniqueName: \"kubernetes.io/projected/00e4da3c-6d3d-459a-86c2-01a4cdb81e51-kube-api-access-vqnw8\") pod \"octavia-operator-controller-manager-69f8888797-pbpdw\" (UID: \"00e4da3c-6d3d-459a-86c2-01a4cdb81e51\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.875637 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rg6p\" (UniqueName: \"kubernetes.io/projected/76134787-0eff-47bd-982e-16c2c4f98f19-kube-api-access-5rg6p\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.879659 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6865b"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.882638 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.897938 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8drdp" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.898935 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.913766 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.932966 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.935303 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6865b"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.947571 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk"] Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.948660 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.951830 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlnd6\" (UniqueName: \"kubernetes.io/projected/c6400043-1325-4af3-8c79-4b383441668c-kube-api-access-zlnd6\") pod \"placement-operator-controller-manager-8497b45c89-64xmt\" (UID: \"c6400043-1325-4af3-8c79-4b383441668c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.951940 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ks2f\" (UniqueName: \"kubernetes.io/projected/6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4-kube-api-access-6ks2f\") pod \"ovn-operator-controller-manager-d44cf6b75-gclp4\" (UID: \"6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.955349 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-5fkd4" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.959062 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.977161 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:06:54 crc kubenswrapper[4740]: I0216 13:06:54.985445 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ks2f\" (UniqueName: \"kubernetes.io/projected/6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4-kube-api-access-6ks2f\") pod \"ovn-operator-controller-manager-d44cf6b75-gclp4\" (UID: \"6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.009077 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.020707 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.062541 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6c9q\" (UniqueName: \"kubernetes.io/projected/04f86073-3515-4d62-a02a-c63d06ecdaaa-kube-api-access-z6c9q\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cnxhk\" (UID: \"04f86073-3515-4d62-a02a-c63d06ecdaaa\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.062630 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlnd6\" (UniqueName: \"kubernetes.io/projected/c6400043-1325-4af3-8c79-4b383441668c-kube-api-access-zlnd6\") pod \"placement-operator-controller-manager-8497b45c89-64xmt\" (UID: \"c6400043-1325-4af3-8c79-4b383441668c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.062732 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7z9\" (UniqueName: \"kubernetes.io/projected/519c5b9e-ed4f-4cba-a731-70a22209f642-kube-api-access-wf7z9\") pod \"swift-operator-controller-manager-68f46476f-6865b\" (UID: \"519c5b9e-ed4f-4cba-a731-70a22209f642\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.070221 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-58cw4"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.071540 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.073027 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.074662 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fs5g5" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.081878 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-58cw4"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.083133 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlnd6\" (UniqueName: \"kubernetes.io/projected/c6400043-1325-4af3-8c79-4b383441668c-kube-api-access-zlnd6\") pod \"placement-operator-controller-manager-8497b45c89-64xmt\" (UID: \"c6400043-1325-4af3-8c79-4b383441668c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.091144 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.092900 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.096487 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-db5db" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.098051 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.117735 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.118663 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.120928 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.121001 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.123299 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7wqpg" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.134957 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.155928 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.157225 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.164937 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7z9\" (UniqueName: \"kubernetes.io/projected/519c5b9e-ed4f-4cba-a731-70a22209f642-kube-api-access-wf7z9\") pod \"swift-operator-controller-manager-68f46476f-6865b\" (UID: \"519c5b9e-ed4f-4cba-a731-70a22209f642\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.165022 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6q9\" (UniqueName: \"kubernetes.io/projected/7666c640-a9f4-4e09-b79c-7fd31116bd79-kube-api-access-dt6q9\") pod \"test-operator-controller-manager-7866795846-58cw4\" (UID: \"7666c640-a9f4-4e09-b79c-7fd31116bd79\") " pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.165069 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6c9q\" (UniqueName: \"kubernetes.io/projected/04f86073-3515-4d62-a02a-c63d06ecdaaa-kube-api-access-z6c9q\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cnxhk\" (UID: \"04f86073-3515-4d62-a02a-c63d06ecdaaa\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.165122 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.165282 4740 secret.go:188] 
Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.165356 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:56.165340916 +0000 UTC m=+843.541689637 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.169351 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-q9wn8" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.184166 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.184530 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.197912 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6c9q\" (UniqueName: \"kubernetes.io/projected/04f86073-3515-4d62-a02a-c63d06ecdaaa-kube-api-access-z6c9q\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cnxhk\" (UID: \"04f86073-3515-4d62-a02a-c63d06ecdaaa\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.201409 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf7z9\" (UniqueName: \"kubernetes.io/projected/519c5b9e-ed4f-4cba-a731-70a22209f642-kube-api-access-wf7z9\") pod \"swift-operator-controller-manager-68f46476f-6865b\" (UID: \"519c5b9e-ed4f-4cba-a731-70a22209f642\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.231637 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.255207 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.266621 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dh6q\" (UniqueName: \"kubernetes.io/projected/e749615e-a716-4e6e-8830-947b128e4e58-kube-api-access-9dh6q\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.266678 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.266720 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt6q9\" (UniqueName: \"kubernetes.io/projected/7666c640-a9f4-4e09-b79c-7fd31116bd79-kube-api-access-dt6q9\") pod \"test-operator-controller-manager-7866795846-58cw4\" (UID: \"7666c640-a9f4-4e09-b79c-7fd31116bd79\") " pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.266876 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc 
kubenswrapper[4740]: I0216 13:06:55.266955 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m64zf\" (UniqueName: \"kubernetes.io/projected/3e6434b1-64ba-481f-b001-8a465254dc0a-kube-api-access-m64zf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qttct\" (UID: \"3e6434b1-64ba-481f-b001-8a465254dc0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.267022 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqdp5\" (UniqueName: \"kubernetes.io/projected/001719d5-3a51-4f6b-b316-9e98f53ed575-kube-api-access-dqdp5\") pod \"watcher-operator-controller-manager-5db88f68c-pbkbj\" (UID: \"001719d5-3a51-4f6b-b316-9e98f53ed575\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.284324 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt6q9\" (UniqueName: \"kubernetes.io/projected/7666c640-a9f4-4e09-b79c-7fd31116bd79-kube-api-access-dt6q9\") pod \"test-operator-controller-manager-7866795846-58cw4\" (UID: \"7666c640-a9f4-4e09-b79c-7fd31116bd79\") " pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.301372 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.303882 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.303996 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.371861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dh6q\" (UniqueName: \"kubernetes.io/projected/e749615e-a716-4e6e-8830-947b128e4e58-kube-api-access-9dh6q\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.371913 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.371956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.371987 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-m64zf\" (UniqueName: \"kubernetes.io/projected/3e6434b1-64ba-481f-b001-8a465254dc0a-kube-api-access-m64zf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qttct\" (UID: \"3e6434b1-64ba-481f-b001-8a465254dc0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.372022 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqdp5\" (UniqueName: \"kubernetes.io/projected/001719d5-3a51-4f6b-b316-9e98f53ed575-kube-api-access-dqdp5\") pod \"watcher-operator-controller-manager-5db88f68c-pbkbj\" (UID: \"001719d5-3a51-4f6b-b316-9e98f53ed575\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.372046 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.372874 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.372931 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:56.372915891 +0000 UTC m=+843.749264602 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.373111 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.373133 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:55.873126618 +0000 UTC m=+843.249475339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.373537 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.373560 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:55.873553541 +0000 UTC m=+843.249902262 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.404403 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dh6q\" (UniqueName: \"kubernetes.io/projected/e749615e-a716-4e6e-8830-947b128e4e58-kube-api-access-9dh6q\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.404454 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m64zf\" (UniqueName: \"kubernetes.io/projected/3e6434b1-64ba-481f-b001-8a465254dc0a-kube-api-access-m64zf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qttct\" (UID: \"3e6434b1-64ba-481f-b001-8a465254dc0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.405154 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqdp5\" (UniqueName: \"kubernetes.io/projected/001719d5-3a51-4f6b-b316-9e98f53ed575-kube-api-access-dqdp5\") pod \"watcher-operator-controller-manager-5db88f68c-pbkbj\" (UID: \"001719d5-3a51-4f6b-b316-9e98f53ed575\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.489033 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.511640 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.517902 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.611651 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.638737 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.656395 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.666590 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.871224 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.874463 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.878957 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod 
\"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.879024 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.879160 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.879209 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:56.879193171 +0000 UTC m=+844.255541892 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.879571 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: E0216 13:06:55.879606 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. 
No retries permitted until 2026-02-16 13:06:56.879597274 +0000 UTC m=+844.255945995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.893907 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.907716 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.914386 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4"] Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.914969 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" event={"ID":"069bdc0e-d9e1-4e93-a6fc-8aa439550dd0","Type":"ContainerStarted","Data":"7d0f3608ca03eacaef9a545c7cf65f78e64cdce34bd3efd92621873484da969a"} Feb 16 13:06:55 crc kubenswrapper[4740]: W0216 13:06:55.915059 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d65efdf_ffc7_44cd_9dd1_1b4d9be2e2a4.slice/crio-3a774460b6d91808f5f33e259f0e0ff615f9441aa4f241daac90cbe8b956a6b3 WatchSource:0}: Error finding container 3a774460b6d91808f5f33e259f0e0ff615f9441aa4f241daac90cbe8b956a6b3: Status 404 returned error can't find the container with id 3a774460b6d91808f5f33e259f0e0ff615f9441aa4f241daac90cbe8b956a6b3 Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.917584 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" event={"ID":"3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17","Type":"ContainerStarted","Data":"d15fad7071d80c28ca043afc7c38fcf3eda19d434122892002dc81454df9733d"} Feb 16 13:06:55 crc kubenswrapper[4740]: W0216 13:06:55.921009 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00e4da3c_6d3d_459a_86c2_01a4cdb81e51.slice/crio-a42d5a1f407d2d60b373504c1fe21c0ee0fc8cd9cddd09a374eadfbeea651fb6 WatchSource:0}: Error finding container a42d5a1f407d2d60b373504c1fe21c0ee0fc8cd9cddd09a374eadfbeea651fb6: Status 404 returned error can't find the container with id a42d5a1f407d2d60b373504c1fe21c0ee0fc8cd9cddd09a374eadfbeea651fb6 Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.927201 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" event={"ID":"90321508-9bb9-458e-ada0-001c779161c1","Type":"ContainerStarted","Data":"47e337add0de33eadbd3b4f34dfa4abf1892de3b8d0cfea22c19f062207445f6"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.930573 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" event={"ID":"d6090007-0c13-4ea2-823c-3d95bb336fd8","Type":"ContainerStarted","Data":"b669f692e024f5d866ad7c1df52c3ebc51d809e755eeaad6389a6fa77398e917"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.934294 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" event={"ID":"7f22cc6e-3761-4336-ab1d-74d9fd88432c","Type":"ContainerStarted","Data":"9b89598cdcbd10918871fe779bfb1498d7ac79fb9d16f4d8005e0a1bdc9ef53f"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.935531 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" event={"ID":"f0032304-8799-4a85-964f-2017bfd2dbc8","Type":"ContainerStarted","Data":"4da242de7dd6711d83ce18efc38b20f8fb33a586b270cbf42e0fabc398651ac9"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.936380 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" event={"ID":"fdf72675-c282-4f45-ad93-19aa643dcff8","Type":"ContainerStarted","Data":"2a40e526f6321380411e8c5383dfb3a989e186a2647040ccdd2f30f690eb4ac6"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.937285 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" event={"ID":"7f932811-4449-440a-b4c7-4817bfb33dd3","Type":"ContainerStarted","Data":"55e53ced0c147ac31a7db6db379fc2d1e2f7bcfb961db6ed3d6c10307eb87d2a"} Feb 16 13:06:55 crc kubenswrapper[4740]: I0216 13:06:55.938610 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" event={"ID":"fce48c02-3aa2-404b-a9a4-7ba789835be0","Type":"ContainerStarted","Data":"b9be96baaeafcbde450bac1e9f2e73fe8ce255ecf5fa6de29af393feae35af53"} Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.068924 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk"] Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.076910 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2"] Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.083577 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r"] Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.087928 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba6767b2_e03c_4c12_880d_90bd809d9b48.slice/crio-af317feb0657db68683948e679ce49efca40c57a063a4a234d08c5a39b998197 WatchSource:0}: Error finding container af317feb0657db68683948e679ce49efca40c57a063a4a234d08c5a39b998197: Status 404 returned error can't find the container with id af317feb0657db68683948e679ce49efca40c57a063a4a234d08c5a39b998197 Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.089086 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod121ee83b_e7f1_4302_9455_4cc6f53a07a5.slice/crio-011c468b651f237ba5b2a19d837b732b2903e4f88d5de8a1ced5c096cb6e078f WatchSource:0}: Error finding container 011c468b651f237ba5b2a19d837b732b2903e4f88d5de8a1ced5c096cb6e078f: Status 404 returned error can't find the container with id 011c468b651f237ba5b2a19d837b732b2903e4f88d5de8a1ced5c096cb6e078f Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.090078 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04f86073_3515_4d62_a02a_c63d06ecdaaa.slice/crio-599e5d5092c1435efa05145eb9aa2b60d1dc27665ac9f4992bd7c7e4bfc841db WatchSource:0}: Error finding container 599e5d5092c1435efa05145eb9aa2b60d1dc27665ac9f4992bd7c7e4bfc841db: Status 404 returned error can't find the container with id 599e5d5092c1435efa05145eb9aa2b60d1dc27665ac9f4992bd7c7e4bfc841db Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.093751 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qlj89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-7t92r_openstack-operators(121ee83b-e7f1-4302-9455-4cc6f53a07a5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.095141 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" podUID="121ee83b-e7f1-4302-9455-4cc6f53a07a5" Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.177743 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-58cw4"] Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.185093 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.185238 4740 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.185289 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:58.185270985 +0000 UTC m=+845.561619706 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.185912 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6865b"] Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.189944 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt"] Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.190769 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod519c5b9e_ed4f_4cba_a731_70a22209f642.slice/crio-4bd7259dfa7349a747eca1148b334f44e5d17510455f9588bd735510a7e35065 WatchSource:0}: Error finding container 4bd7259dfa7349a747eca1148b334f44e5d17510455f9588bd735510a7e35065: Status 404 returned error can't find the container with id 4bd7259dfa7349a747eca1148b334f44e5d17510455f9588bd735510a7e35065 Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.193993 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wf7z9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-6865b_openstack-operators(519c5b9e-ed4f-4cba-a731-70a22209f642): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.195197 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" podUID="519c5b9e-ed4f-4cba-a731-70a22209f642" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.212694 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zlnd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-64xmt_openstack-operators(c6400043-1325-4af3-8c79-4b383441668c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.212764 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dt6q9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-58cw4_openstack-operators(7666c640-a9f4-4e09-b79c-7fd31116bd79): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.214456 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" podUID="7666c640-a9f4-4e09-b79c-7fd31116bd79" Feb 16 13:06:56 crc 
kubenswrapper[4740]: E0216 13:06:56.214605 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" podUID="c6400043-1325-4af3-8c79-4b383441668c" Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.317408 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct"] Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.321269 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj"] Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.327913 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e6434b1_64ba_481f_b001_8a465254dc0a.slice/crio-bc9fd7fcca035933ef6698d62b80243dba583386f3d44ca975cdf045c8c34e41 WatchSource:0}: Error finding container bc9fd7fcca035933ef6698d62b80243dba583386f3d44ca975cdf045c8c34e41: Status 404 returned error can't find the container with id bc9fd7fcca035933ef6698d62b80243dba583386f3d44ca975cdf045c8c34e41 Feb 16 13:06:56 crc kubenswrapper[4740]: W0216 13:06:56.335385 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod001719d5_3a51_4f6b_b316_9e98f53ed575.slice/crio-9bf8214a64c4d3745cac983cd9f790cd6e27ea7ca79929e0ccd79012cc482244 WatchSource:0}: Error finding container 9bf8214a64c4d3745cac983cd9f790cd6e27ea7ca79929e0ccd79012cc482244: Status 404 returned error can't find the container with id 9bf8214a64c4d3745cac983cd9f790cd6e27ea7ca79929e0ccd79012cc482244 Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.339860 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqdp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-pbkbj_openstack-operators(001719d5-3a51-4f6b-b316-9e98f53ed575): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.341230 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" podUID="001719d5-3a51-4f6b-b316-9e98f53ed575" Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.389306 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.389455 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found 
Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.390952 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:58.390931338 +0000 UTC m=+845.767280059 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.896556 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.896611 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.896736 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.896778 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 
13:06:56.896804 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:58.896784734 +0000 UTC m=+846.273133455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.896880 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:06:58.896870777 +0000 UTC m=+846.273219498 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.953453 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" event={"ID":"519c5b9e-ed4f-4cba-a731-70a22209f642","Type":"ContainerStarted","Data":"4bd7259dfa7349a747eca1148b334f44e5d17510455f9588bd735510a7e35065"} Feb 16 13:06:56 crc kubenswrapper[4740]: E0216 13:06:56.958947 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" podUID="519c5b9e-ed4f-4cba-a731-70a22209f642" Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.960159 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" event={"ID":"3e6434b1-64ba-481f-b001-8a465254dc0a","Type":"ContainerStarted","Data":"bc9fd7fcca035933ef6698d62b80243dba583386f3d44ca975cdf045c8c34e41"} Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.991587 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" event={"ID":"ba6767b2-e03c-4c12-880d-90bd809d9b48","Type":"ContainerStarted","Data":"af317feb0657db68683948e679ce49efca40c57a063a4a234d08c5a39b998197"} Feb 16 13:06:56 crc kubenswrapper[4740]: I0216 13:06:56.999179 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" event={"ID":"a49c1d67-8cf7-4429-ac73-da13d129304d","Type":"ContainerStarted","Data":"0385dab7ad9131cea67b9c8c27e8b5b113798f5cf90514a26bd1963ffd3614b3"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.006744 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" event={"ID":"7666c640-a9f4-4e09-b79c-7fd31116bd79","Type":"ContainerStarted","Data":"a65407aff5a17c047ca3fc566d9eb9da30e8b17d0332823937c418a04667a0a0"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.008458 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" event={"ID":"121ee83b-e7f1-4302-9455-4cc6f53a07a5","Type":"ContainerStarted","Data":"011c468b651f237ba5b2a19d837b732b2903e4f88d5de8a1ced5c096cb6e078f"} Feb 16 13:06:57 crc kubenswrapper[4740]: E0216 13:06:57.010751 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" podUID="7666c640-a9f4-4e09-b79c-7fd31116bd79" Feb 16 13:06:57 crc kubenswrapper[4740]: E0216 13:06:57.017158 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" podUID="121ee83b-e7f1-4302-9455-4cc6f53a07a5" Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.024414 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" event={"ID":"00e4da3c-6d3d-459a-86c2-01a4cdb81e51","Type":"ContainerStarted","Data":"a42d5a1f407d2d60b373504c1fe21c0ee0fc8cd9cddd09a374eadfbeea651fb6"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.039421 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" event={"ID":"04f86073-3515-4d62-a02a-c63d06ecdaaa","Type":"ContainerStarted","Data":"599e5d5092c1435efa05145eb9aa2b60d1dc27665ac9f4992bd7c7e4bfc841db"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.043615 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" event={"ID":"001719d5-3a51-4f6b-b316-9e98f53ed575","Type":"ContainerStarted","Data":"9bf8214a64c4d3745cac983cd9f790cd6e27ea7ca79929e0ccd79012cc482244"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.050306 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" event={"ID":"c6400043-1325-4af3-8c79-4b383441668c","Type":"ContainerStarted","Data":"5a8d5f49d18add468072f26ba9b7bf5bff6f197fcc497101efb9222b6c62cd23"} Feb 16 13:06:57 crc kubenswrapper[4740]: I0216 13:06:57.055089 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" event={"ID":"6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4","Type":"ContainerStarted","Data":"3a774460b6d91808f5f33e259f0e0ff615f9441aa4f241daac90cbe8b956a6b3"} Feb 16 13:06:57 crc kubenswrapper[4740]: E0216 13:06:57.059276 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" podUID="c6400043-1325-4af3-8c79-4b383441668c" Feb 16 13:06:57 crc kubenswrapper[4740]: E0216 13:06:57.059366 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" podUID="001719d5-3a51-4f6b-b316-9e98f53ed575" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.064961 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" podUID="519c5b9e-ed4f-4cba-a731-70a22209f642" 
Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.065516 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" podUID="c6400043-1325-4af3-8c79-4b383441668c" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.065577 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" podUID="7666c640-a9f4-4e09-b79c-7fd31116bd79" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.065629 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" podUID="121ee83b-e7f1-4302-9455-4cc6f53a07a5" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.065704 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" podUID="001719d5-3a51-4f6b-b316-9e98f53ed575" Feb 16 13:06:58 crc kubenswrapper[4740]: I0216 13:06:58.218541 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.218669 4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.218825 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:02.218789709 +0000 UTC m=+849.595138430 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: I0216 13:06:58.423045 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.424696 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.424745 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:02.42473177 +0000 UTC m=+849.801080491 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: I0216 13:06:58.930014 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:58 crc kubenswrapper[4740]: I0216 13:06:58.930096 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.930256 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.930317 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. 
No retries permitted until 2026-02-16 13:07:02.930297448 +0000 UTC m=+850.306646179 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.930742 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:06:58 crc kubenswrapper[4740]: E0216 13:06:58.930782 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:02.930772513 +0000 UTC m=+850.307121234 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:07:02 crc kubenswrapper[4740]: I0216 13:07:02.289153 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:02 crc kubenswrapper[4740]: E0216 13:07:02.289465 4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:07:02 crc kubenswrapper[4740]: E0216 13:07:02.290668 4740 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:10.290645583 +0000 UTC m=+857.666994304 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:07:02 crc kubenswrapper[4740]: I0216 13:07:02.493955 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:02 crc kubenswrapper[4740]: E0216 13:07:02.494175 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:07:02 crc kubenswrapper[4740]: E0216 13:07:02.494257 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:10.494238118 +0000 UTC m=+857.870586829 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:07:03 crc kubenswrapper[4740]: I0216 13:07:03.001105 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:03 crc kubenswrapper[4740]: I0216 13:07:03.001183 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:03 crc kubenswrapper[4740]: E0216 13:07:03.001317 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:07:03 crc kubenswrapper[4740]: E0216 13:07:03.001400 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:07:03 crc kubenswrapper[4740]: E0216 13:07:03.001436 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:11.001410627 +0000 UTC m=+858.377759358 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:07:03 crc kubenswrapper[4740]: E0216 13:07:03.001481 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:11.001460859 +0000 UTC m=+858.377809580 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.033291 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.034058 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m64zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-qttct_openstack-operators(3e6434b1-64ba-481f-b001-8a465254dc0a): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.035628 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" podUID="3e6434b1-64ba-481f-b001-8a465254dc0a" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.143303 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" podUID="3e6434b1-64ba-481f-b001-8a465254dc0a" Feb 16 13:07:10 crc kubenswrapper[4740]: I0216 13:07:10.319971 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.320101 4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.320151 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert podName:4eba30c7-3dab-4b8f-8a22-2dae642a6ac5 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:26.320137708 +0000 UTC m=+873.696486429 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert") pod "infra-operator-controller-manager-79d975b745-s8wc5" (UID: "4eba30c7-3dab-4b8f-8a22-2dae642a6ac5") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:07:10 crc kubenswrapper[4740]: I0216 13:07:10.523252 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.523461 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.523550 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert podName:76134787-0eff-47bd-982e-16c2c4f98f19 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:26.523526927 +0000 UTC m=+873.899875708 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" (UID: "76134787-0eff-47bd-982e-16c2c4f98f19") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.563126 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.563386 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjjht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-z2m7j_openstack-operators(fce48c02-3aa2-404b-a9a4-7ba789835be0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:07:10 crc kubenswrapper[4740]: E0216 13:07:10.565388 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" podUID="fce48c02-3aa2-404b-a9a4-7ba789835be0" Feb 16 13:07:11 crc kubenswrapper[4740]: I0216 13:07:11.030250 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod 
\"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:11 crc kubenswrapper[4740]: I0216 13:07:11.030447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.030497 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.030592 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.030603 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. No retries permitted until 2026-02-16 13:07:27.030578922 +0000 UTC m=+874.406927653 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "metrics-server-cert" not found Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.030669 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs podName:e749615e-a716-4e6e-8830-947b128e4e58 nodeName:}" failed. 
No retries permitted until 2026-02-16 13:07:27.030650905 +0000 UTC m=+874.406999706 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs") pod "openstack-operator-controller-manager-5cd688d8fc-7shgl" (UID: "e749615e-a716-4e6e-8830-947b128e4e58") : secret "webhook-server-cert" not found Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.132665 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.132870 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrlkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-fn4g2_openstack-operators(ba6767b2-e03c-4c12-880d-90bd809d9b48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.134419 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" podUID="ba6767b2-e03c-4c12-880d-90bd809d9b48" Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.162085 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" podUID="ba6767b2-e03c-4c12-880d-90bd809d9b48" Feb 16 13:07:11 crc kubenswrapper[4740]: E0216 13:07:11.162298 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" podUID="fce48c02-3aa2-404b-a9a4-7ba789835be0" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.156454 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" event={"ID":"04f86073-3515-4d62-a02a-c63d06ecdaaa","Type":"ContainerStarted","Data":"84e0c2ad86b4b346bf7c5e722ea5b1553d4a9ceffbd1769f6e2ab3f0bbebf2be"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.157526 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.163586 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" event={"ID":"fdf72675-c282-4f45-ad93-19aa643dcff8","Type":"ContainerStarted","Data":"ed84f9bc75348d4de5e76b4a1258f7231376e554f543995fc9f51e997bea6b8a"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.164203 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.173094 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" event={"ID":"3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17","Type":"ContainerStarted","Data":"5e12839da7ef9d73f0abaf53e6765c0811604425ebc887f5fd89416794867fad"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.173735 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.176958 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" event={"ID":"90321508-9bb9-458e-ada0-001c779161c1","Type":"ContainerStarted","Data":"0663476f63d36bde41abe9508a187caac021094503bed92dddba2140dfa81573"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.177350 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.190033 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" event={"ID":"7f22cc6e-3761-4336-ab1d-74d9fd88432c","Type":"ContainerStarted","Data":"97e6ced6e91be15d4adf0e418a7c0e5edd3962baae38b8b1fbd006434c51def0"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.190178 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.195161 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" event={"ID":"00e4da3c-6d3d-459a-86c2-01a4cdb81e51","Type":"ContainerStarted","Data":"d1913dc70e1cc4caba3d347773ae29e7a522c4d9cff123497501ebf4d5b011d3"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.195361 4740 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.205640 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" event={"ID":"a49c1d67-8cf7-4429-ac73-da13d129304d","Type":"ContainerStarted","Data":"d9bdf62562c2e6bd794cd16d311402868a6ad520153614fecea588155a8ebd23"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.206453 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.217933 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" podStartSLOduration=3.786177252 podStartE2EDuration="18.217910801s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.092980136 +0000 UTC m=+843.469328867" lastFinishedPulling="2026-02-16 13:07:10.524713695 +0000 UTC m=+857.901062416" observedRunningTime="2026-02-16 13:07:12.20371048 +0000 UTC m=+859.580059201" watchObservedRunningTime="2026-02-16 13:07:12.217910801 +0000 UTC m=+859.594259522" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.218951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" event={"ID":"7f932811-4449-440a-b4c7-4817bfb33dd3","Type":"ContainerStarted","Data":"86c23ad09326d1bfc0140d4535854a9f7dba8a9e2b8579db0832aaa56b1d9748"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.219060 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.228385 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" event={"ID":"d6090007-0c13-4ea2-823c-3d95bb336fd8","Type":"ContainerStarted","Data":"51237ef57631201d6327bcfabb8e5bd4b44ee141f5b4fade5936342435c24da9"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.228967 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.231232 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" event={"ID":"069bdc0e-d9e1-4e93-a6fc-8aa439550dd0","Type":"ContainerStarted","Data":"ad8f5f3cb83d2ac1177f2f7eb89142497decd4142cde55e7f0d38b2163308524"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.231633 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.233854 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" event={"ID":"f0032304-8799-4a85-964f-2017bfd2dbc8","Type":"ContainerStarted","Data":"0641538bef3b64eaf3658ca0464a0ecb233e70aa265fac3e7c1dcd2bb70648d9"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.234315 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.235841 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" event={"ID":"6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4","Type":"ContainerStarted","Data":"d7d93153ca61ea6d634b2eae1fab9ea259c611289280ff2c9db9a61198ba1f99"} Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.236176 4740 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.272860 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" podStartSLOduration=3.440859663 podStartE2EDuration="18.272841796s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.691543034 +0000 UTC m=+843.067891755" lastFinishedPulling="2026-02-16 13:07:10.523525167 +0000 UTC m=+857.899873888" observedRunningTime="2026-02-16 13:07:12.246013994 +0000 UTC m=+859.622362715" watchObservedRunningTime="2026-02-16 13:07:12.272841796 +0000 UTC m=+859.649190517" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.273038 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" podStartSLOduration=3.100680738 podStartE2EDuration="18.273034902s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.930534118 +0000 UTC m=+843.306882839" lastFinishedPulling="2026-02-16 13:07:11.102888282 +0000 UTC m=+858.479237003" observedRunningTime="2026-02-16 13:07:12.268736072 +0000 UTC m=+859.645084803" watchObservedRunningTime="2026-02-16 13:07:12.273034902 +0000 UTC m=+859.649383623" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.308251 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" podStartSLOduration=3.47186374 podStartE2EDuration="18.308235495s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.689420505 +0000 UTC m=+843.065769226" lastFinishedPulling="2026-02-16 13:07:10.52579218 +0000 UTC m=+857.902140981" observedRunningTime="2026-02-16 13:07:12.307928155 +0000 
UTC m=+859.684276866" watchObservedRunningTime="2026-02-16 13:07:12.308235495 +0000 UTC m=+859.684584216" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.336875 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" podStartSLOduration=2.624265129 podStartE2EDuration="18.336857186s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.387420991 +0000 UTC m=+842.763769712" lastFinishedPulling="2026-02-16 13:07:11.100013058 +0000 UTC m=+858.476361769" observedRunningTime="2026-02-16 13:07:12.336299787 +0000 UTC m=+859.712648508" watchObservedRunningTime="2026-02-16 13:07:12.336857186 +0000 UTC m=+859.713205917" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.363603 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" podStartSLOduration=3.515790108 podStartE2EDuration="18.363582775s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.678641384 +0000 UTC m=+843.054990105" lastFinishedPulling="2026-02-16 13:07:10.526434051 +0000 UTC m=+857.902782772" observedRunningTime="2026-02-16 13:07:12.356550956 +0000 UTC m=+859.732899677" watchObservedRunningTime="2026-02-16 13:07:12.363582775 +0000 UTC m=+859.739931486" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.382630 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" podStartSLOduration=3.77535102 podStartE2EDuration="18.382606842s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.91919246 +0000 UTC m=+843.295541181" lastFinishedPulling="2026-02-16 13:07:10.526448282 +0000 UTC m=+857.902797003" observedRunningTime="2026-02-16 13:07:12.377917559 +0000 UTC m=+859.754266290" 
watchObservedRunningTime="2026-02-16 13:07:12.382606842 +0000 UTC m=+859.758955563" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.397938 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" podStartSLOduration=3.7894790499999997 podStartE2EDuration="18.39792288s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.916822863 +0000 UTC m=+843.293171584" lastFinishedPulling="2026-02-16 13:07:10.525266693 +0000 UTC m=+857.901615414" observedRunningTime="2026-02-16 13:07:12.396744672 +0000 UTC m=+859.773093393" watchObservedRunningTime="2026-02-16 13:07:12.39792288 +0000 UTC m=+859.774271601" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.422483 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" podStartSLOduration=3.261661531 podStartE2EDuration="18.422458028s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.364376513 +0000 UTC m=+842.740725234" lastFinishedPulling="2026-02-16 13:07:10.52517301 +0000 UTC m=+857.901521731" observedRunningTime="2026-02-16 13:07:12.412824994 +0000 UTC m=+859.789173725" watchObservedRunningTime="2026-02-16 13:07:12.422458028 +0000 UTC m=+859.798806759" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.433902 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" podStartSLOduration=3.530894457 podStartE2EDuration="18.433883328s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.623009716 +0000 UTC m=+842.999358437" lastFinishedPulling="2026-02-16 13:07:10.525998597 +0000 UTC m=+857.902347308" observedRunningTime="2026-02-16 13:07:12.433554428 +0000 UTC m=+859.809903149" 
watchObservedRunningTime="2026-02-16 13:07:12.433883328 +0000 UTC m=+859.810232059" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.457744 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" podStartSLOduration=3.242190717 podStartE2EDuration="18.457723583s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.884958118 +0000 UTC m=+843.261306839" lastFinishedPulling="2026-02-16 13:07:11.100490984 +0000 UTC m=+858.476839705" observedRunningTime="2026-02-16 13:07:12.447290924 +0000 UTC m=+859.823639665" watchObservedRunningTime="2026-02-16 13:07:12.457723583 +0000 UTC m=+859.834072314" Feb 16 13:07:12 crc kubenswrapper[4740]: I0216 13:07:12.483197 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" podStartSLOduration=3.246012681 podStartE2EDuration="18.48317076s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.289771199 +0000 UTC m=+842.666119910" lastFinishedPulling="2026-02-16 13:07:10.526929278 +0000 UTC m=+857.903277989" observedRunningTime="2026-02-16 13:07:12.478540739 +0000 UTC m=+859.854889460" watchObservedRunningTime="2026-02-16 13:07:12.48317076 +0000 UTC m=+859.859519481" Feb 16 13:07:15 crc kubenswrapper[4740]: I0216 13:07:15.575072 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:07:15 crc kubenswrapper[4740]: I0216 13:07:15.575474 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.277549 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" event={"ID":"7666c640-a9f4-4e09-b79c-7fd31116bd79","Type":"ContainerStarted","Data":"782e3861203e4b66f7375516abf19be229ad01a72ce37691a3dcfc0d91b53915"} Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.278098 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.279651 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" event={"ID":"121ee83b-e7f1-4302-9455-4cc6f53a07a5","Type":"ContainerStarted","Data":"df3d9b98206d044894775611ac1ef31c9aa82f4b3330bdc50e132795e701feca"} Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.279802 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.287262 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" event={"ID":"001719d5-3a51-4f6b-b316-9e98f53ed575","Type":"ContainerStarted","Data":"cb92ec2f351b1d9e65c1ba15213791e2ba9ddc8481f16300d327311889bd54b7"} Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.287534 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" event={"ID":"c6400043-1325-4af3-8c79-4b383441668c","Type":"ContainerStarted","Data":"ee1a832448add5257d492e9462714124a4be4dbb893756aacc6fd3e59955c8ba"} Feb 16 13:07:17 crc 
kubenswrapper[4740]: I0216 13:07:17.287654 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" event={"ID":"519c5b9e-ed4f-4cba-a731-70a22209f642","Type":"ContainerStarted","Data":"1b47cebdd371945a19bfa9f9e749331f6957e508cc1023abd834463c292cf490"} Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.287914 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.299844 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" podStartSLOduration=3.455433416 podStartE2EDuration="23.299826993s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.21249115 +0000 UTC m=+843.588839871" lastFinishedPulling="2026-02-16 13:07:16.056884727 +0000 UTC m=+863.433233448" observedRunningTime="2026-02-16 13:07:17.294785299 +0000 UTC m=+864.671134020" watchObservedRunningTime="2026-02-16 13:07:17.299826993 +0000 UTC m=+864.676175714" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.309938 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" podStartSLOduration=3.592764379 podStartE2EDuration="23.309920682s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.339727094 +0000 UTC m=+843.716075815" lastFinishedPulling="2026-02-16 13:07:16.056883397 +0000 UTC m=+863.433232118" observedRunningTime="2026-02-16 13:07:17.30679502 +0000 UTC m=+864.683143741" watchObservedRunningTime="2026-02-16 13:07:17.309920682 +0000 UTC m=+864.686269403" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.323284 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" podStartSLOduration=3.435780948 podStartE2EDuration="23.323261195s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.212526952 +0000 UTC m=+843.588875673" lastFinishedPulling="2026-02-16 13:07:16.100007189 +0000 UTC m=+863.476355920" observedRunningTime="2026-02-16 13:07:17.31849647 +0000 UTC m=+864.694845191" watchObservedRunningTime="2026-02-16 13:07:17.323261195 +0000 UTC m=+864.699609916" Feb 16 13:07:17 crc kubenswrapper[4740]: I0216 13:07:17.339045 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" podStartSLOduration=3.302223367 podStartE2EDuration="23.339024907s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.093617247 +0000 UTC m=+843.469965968" lastFinishedPulling="2026-02-16 13:07:16.130418747 +0000 UTC m=+863.506767508" observedRunningTime="2026-02-16 13:07:17.332967441 +0000 UTC m=+864.709316192" watchObservedRunningTime="2026-02-16 13:07:17.339024907 +0000 UTC m=+864.715373628" Feb 16 13:07:22 crc kubenswrapper[4740]: I0216 13:07:22.308074 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" podStartSLOduration=8.444854984 podStartE2EDuration="28.308045902s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.193782352 +0000 UTC m=+843.570131073" lastFinishedPulling="2026-02-16 13:07:16.05697328 +0000 UTC m=+863.433321991" observedRunningTime="2026-02-16 13:07:17.355265205 +0000 UTC m=+864.731613936" watchObservedRunningTime="2026-02-16 13:07:22.308045902 +0000 UTC m=+869.684394623" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.342586 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" event={"ID":"fce48c02-3aa2-404b-a9a4-7ba789835be0","Type":"ContainerStarted","Data":"2d3cbc07bcfc6f98ac797b064aa71ca152850ed6b1531ee6a4137422447ec884"} Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.343326 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.344165 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" event={"ID":"3e6434b1-64ba-481f-b001-8a465254dc0a","Type":"ContainerStarted","Data":"7cfcf45a9d3c0e3a79357088f866787e4046616eef45b1b7cb38b2b088f4f874"} Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.370255 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" podStartSLOduration=2.790622815 podStartE2EDuration="30.370237147s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:55.916638147 +0000 UTC m=+843.292986868" lastFinishedPulling="2026-02-16 13:07:23.496252479 +0000 UTC m=+870.872601200" observedRunningTime="2026-02-16 13:07:24.365517014 +0000 UTC m=+871.741865755" watchObservedRunningTime="2026-02-16 13:07:24.370237147 +0000 UTC m=+871.746585878" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.384193 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qttct" podStartSLOduration=2.940517075 podStartE2EDuration="30.3841762s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.330146983 +0000 UTC m=+843.706495704" lastFinishedPulling="2026-02-16 13:07:23.773806108 +0000 UTC m=+871.150154829" observedRunningTime="2026-02-16 13:07:24.38017177 +0000 UTC 
m=+871.756520501" watchObservedRunningTime="2026-02-16 13:07:24.3841762 +0000 UTC m=+871.760524921" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.524072 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jsfjx" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.541519 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rpbmb" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.593545 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9kqqk" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.640079 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9xbzr" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.662919 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kk4mh" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.694873 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-nl26x" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.711376 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-v28lz" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.767475 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-44wdn" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.838199 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7gw4t" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.964714 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-7t92r" Feb 16 13:07:24 crc kubenswrapper[4740]: I0216 13:07:24.981002 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pbpdw" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.023554 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-gclp4" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.185489 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.187454 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-64xmt" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.255637 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.258181 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6865b" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.303547 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cnxhk" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.493109 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-7866795846-58cw4" Feb 16 13:07:25 crc kubenswrapper[4740]: I0216 13:07:25.521184 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-pbkbj" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.358402 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" event={"ID":"ba6767b2-e03c-4c12-880d-90bd809d9b48","Type":"ContainerStarted","Data":"b67dd3080518a824d8576b9acac0f3e600b96fde499ccc9f8e8d8181a0798355"} Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.358927 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.367844 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.374673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4eba30c7-3dab-4b8f-8a22-2dae642a6ac5-cert\") pod \"infra-operator-controller-manager-79d975b745-s8wc5\" (UID: \"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.378747 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" podStartSLOduration=2.4767722770000002 podStartE2EDuration="32.378723607s" podCreationTimestamp="2026-02-16 
13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:06:56.093324888 +0000 UTC m=+843.469673609" lastFinishedPulling="2026-02-16 13:07:25.995276198 +0000 UTC m=+873.371624939" observedRunningTime="2026-02-16 13:07:26.3754256 +0000 UTC m=+873.751774381" watchObservedRunningTime="2026-02-16 13:07:26.378723607 +0000 UTC m=+873.755072328" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.532740 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.571423 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.575493 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76134787-0eff-47bd-982e-16c2c4f98f19-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7\" (UID: \"76134787-0eff-47bd-982e-16c2c4f98f19\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.799665 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:26 crc kubenswrapper[4740]: I0216 13:07:26.962137 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5"] Feb 16 13:07:26 crc kubenswrapper[4740]: W0216 13:07:26.974627 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eba30c7_3dab_4b8f_8a22_2dae642a6ac5.slice/crio-12db95b6245dace301ab5f48c390155296e356708f774fdbf78c2d5e6aedef85 WatchSource:0}: Error finding container 12db95b6245dace301ab5f48c390155296e356708f774fdbf78c2d5e6aedef85: Status 404 returned error can't find the container with id 12db95b6245dace301ab5f48c390155296e356708f774fdbf78c2d5e6aedef85 Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.059518 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7"] Feb 16 13:07:27 crc kubenswrapper[4740]: W0216 13:07:27.066595 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76134787_0eff_47bd_982e_16c2c4f98f19.slice/crio-bf615d4633589a56c04637729baa7a9a047313ef5e9b1c4728954b5e9dd0c825 WatchSource:0}: Error finding container bf615d4633589a56c04637729baa7a9a047313ef5e9b1c4728954b5e9dd0c825: Status 404 returned error can't find the container with id bf615d4633589a56c04637729baa7a9a047313ef5e9b1c4728954b5e9dd0c825 Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.077852 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " 
pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.077911 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.083064 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-webhook-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.083159 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e749615e-a716-4e6e-8830-947b128e4e58-metrics-certs\") pod \"openstack-operator-controller-manager-5cd688d8fc-7shgl\" (UID: \"e749615e-a716-4e6e-8830-947b128e4e58\") " pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.335285 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.369760 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" event={"ID":"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5","Type":"ContainerStarted","Data":"12db95b6245dace301ab5f48c390155296e356708f774fdbf78c2d5e6aedef85"} Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.371918 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" event={"ID":"76134787-0eff-47bd-982e-16c2c4f98f19","Type":"ContainerStarted","Data":"bf615d4633589a56c04637729baa7a9a047313ef5e9b1c4728954b5e9dd0c825"} Feb 16 13:07:27 crc kubenswrapper[4740]: I0216 13:07:27.595068 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl"] Feb 16 13:07:27 crc kubenswrapper[4740]: W0216 13:07:27.602931 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode749615e_a716_4e6e_8830_947b128e4e58.slice/crio-4d1c8aeeef6837c0d18452e9f1a36f7063af6ab762697d4d3dc047e595c97c34 WatchSource:0}: Error finding container 4d1c8aeeef6837c0d18452e9f1a36f7063af6ab762697d4d3dc047e595c97c34: Status 404 returned error can't find the container with id 4d1c8aeeef6837c0d18452e9f1a36f7063af6ab762697d4d3dc047e595c97c34 Feb 16 13:07:28 crc kubenswrapper[4740]: I0216 13:07:28.378243 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" event={"ID":"e749615e-a716-4e6e-8830-947b128e4e58","Type":"ContainerStarted","Data":"4d1c8aeeef6837c0d18452e9f1a36f7063af6ab762697d4d3dc047e595c97c34"} Feb 16 13:07:34 crc kubenswrapper[4740]: I0216 13:07:33.413192 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" event={"ID":"e749615e-a716-4e6e-8830-947b128e4e58","Type":"ContainerStarted","Data":"56f7e74b441774cbf8d353d6d64eee38fb2c980ad13142bf575e045081561cb5"} Feb 16 13:07:34 crc kubenswrapper[4740]: I0216 13:07:34.419106 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:07:34 crc kubenswrapper[4740]: I0216 13:07:34.457794 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" podStartSLOduration=40.457768363 podStartE2EDuration="40.457768363s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:07:34.445266256 +0000 UTC m=+881.821614987" watchObservedRunningTime="2026-02-16 13:07:34.457768363 +0000 UTC m=+881.834117084" Feb 16 13:07:34 crc kubenswrapper[4740]: I0216 13:07:34.935951 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fn4g2" Feb 16 13:07:35 crc kubenswrapper[4740]: I0216 13:07:35.076625 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-z2m7j" Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.461293 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" event={"ID":"76134787-0eff-47bd-982e-16c2c4f98f19","Type":"ContainerStarted","Data":"c802d10363fd4c0de847e852822b166249f1a99e639fc391869de9e542865e15"} Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.461930 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.462921 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" event={"ID":"4eba30c7-3dab-4b8f-8a22-2dae642a6ac5","Type":"ContainerStarted","Data":"09bbb730533db26d01ef27bda2c918a420e8d92fb9e674e07228303420ab8218"} Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.463419 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.494963 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" podStartSLOduration=34.062029282 podStartE2EDuration="46.494943874s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:07:27.068750618 +0000 UTC m=+874.445099339" lastFinishedPulling="2026-02-16 13:07:39.50166521 +0000 UTC m=+886.878013931" observedRunningTime="2026-02-16 13:07:40.487193952 +0000 UTC m=+887.863542673" watchObservedRunningTime="2026-02-16 13:07:40.494943874 +0000 UTC m=+887.871292595" Feb 16 13:07:40 crc kubenswrapper[4740]: I0216 13:07:40.513881 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" podStartSLOduration=33.994115504 podStartE2EDuration="46.513857338s" podCreationTimestamp="2026-02-16 13:06:54 +0000 UTC" firstStartedPulling="2026-02-16 13:07:26.978802295 +0000 UTC m=+874.355151016" lastFinishedPulling="2026-02-16 13:07:39.498544129 +0000 UTC m=+886.874892850" observedRunningTime="2026-02-16 13:07:40.505929031 +0000 UTC m=+887.882277772" watchObservedRunningTime="2026-02-16 13:07:40.513857338 +0000 UTC m=+887.890206059" Feb 16 13:07:45 crc 
kubenswrapper[4740]: I0216 13:07:45.575882 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:07:45 crc kubenswrapper[4740]: I0216 13:07:45.576304 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:07:46 crc kubenswrapper[4740]: I0216 13:07:46.543130 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-s8wc5" Feb 16 13:07:46 crc kubenswrapper[4740]: I0216 13:07:46.810707 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7" Feb 16 13:07:47 crc kubenswrapper[4740]: I0216 13:07:47.341718 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5cd688d8fc-7shgl" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.864616 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"] Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.866125 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.870278 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.882169 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.882408 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.883196 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-f9mjd" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.897416 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"] Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.935564 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npxln\" (UniqueName: \"kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.935620 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.995510 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"] Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.996822 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:05 crc kubenswrapper[4740]: I0216 13:08:05.998359 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.036690 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7mmx\" (UniqueName: \"kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.036781 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npxln\" (UniqueName: \"kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.036907 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.036940 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.037000 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.038279 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.047283 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"] Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.062295 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npxln\" (UniqueName: \"kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln\") pod \"dnsmasq-dns-675f4bcbfc-969lr\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.137502 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.137930 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.137976 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-t7mmx\" (UniqueName: \"kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.138574 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.139088 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.154605 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7mmx\" (UniqueName: \"kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx\") pod \"dnsmasq-dns-78dd6ddcc-vnpwr\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.211019 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.313232 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.653141 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"] Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.746777 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"] Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.838290 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" event={"ID":"6c71f0c5-67a1-4d67-b2de-dba8295ef084","Type":"ContainerStarted","Data":"5224238b02027848f7d0fad895fde7f066f7a57334c88225bed293830669bbbe"} Feb 16 13:08:06 crc kubenswrapper[4740]: I0216 13:08:06.843632 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" event={"ID":"6a869f8c-c538-49fc-9e00-9b8b4b298687","Type":"ContainerStarted","Data":"72745cfed959172c621e0861c20ad46e7969585903965049f95725965dd4a30e"} Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.529896 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"] Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.557725 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"] Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.559028 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.571793 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"] Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.588638 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlv5h\" (UniqueName: \"kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.588711 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.588753 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.690282 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlv5h\" (UniqueName: \"kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.690360 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.690406 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.691488 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.692609 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.735926 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlv5h\" (UniqueName: \"kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h\") pod \"dnsmasq-dns-666b6646f7-ccvmk\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.880187 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"] Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.887793 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.907779 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"] Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.909233 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:08 crc kubenswrapper[4740]: I0216 13:08:08.924500 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"] Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.096233 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rckbt\" (UniqueName: \"kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.096485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.096514 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.198879 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rckbt\" (UniqueName: 
\"kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.198977 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.199026 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.200355 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.200390 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.218877 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rckbt\" (UniqueName: \"kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt\") pod \"dnsmasq-dns-57d769cc4f-7wlm8\" 
(UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.290203 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.419431 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"] Feb 16 13:08:09 crc kubenswrapper[4740]: W0216 13:08:09.459734 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23232c7f_b058_4eec_850d_b28aecf39a2f.slice/crio-4428fc13eb34f9023844cc127231859b8a3b24009a896cd3317e7735b1465f72 WatchSource:0}: Error finding container 4428fc13eb34f9023844cc127231859b8a3b24009a896cd3317e7735b1465f72: Status 404 returned error can't find the container with id 4428fc13eb34f9023844cc127231859b8a3b24009a896cd3317e7735b1465f72 Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.547450 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"] Feb 16 13:08:09 crc kubenswrapper[4740]: W0216 13:08:09.556535 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d209d0f_d8e6_4e45_aca9_f1e3245be3f8.slice/crio-e3b33a27e9f185d3c8746ab220eb49acc501a2fe3217ba858f1fea1221ed0edf WatchSource:0}: Error finding container e3b33a27e9f185d3c8746ab220eb49acc501a2fe3217ba858f1fea1221ed0edf: Status 404 returned error can't find the container with id e3b33a27e9f185d3c8746ab220eb49acc501a2fe3217ba858f1fea1221ed0edf Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.729302 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.730565 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.736344 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.736720 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.736892 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.737058 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.737188 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.738888 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-c72m7" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.740178 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.742191 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.887078 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" event={"ID":"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8","Type":"ContainerStarted","Data":"e3b33a27e9f185d3c8746ab220eb49acc501a2fe3217ba858f1fea1221ed0edf"} Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.888298 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" 
event={"ID":"23232c7f-b058-4eec-850d-b28aecf39a2f","Type":"ContainerStarted","Data":"4428fc13eb34f9023844cc127231859b8a3b24009a896cd3317e7735b1465f72"}
Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.909759 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.909880 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.909919 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.909973 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910032 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtjzt\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910076 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910096 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910120 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910158 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910196 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:09 crc kubenswrapper[4740]: I0216 13:08:09.910234 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.014994 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015073 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015102 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015118 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015148 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015170 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtjzt\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015199 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015214 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015230 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015248 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.015274 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.018691 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.020527 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.021248 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.021500 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.021753 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.021865 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.023690 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.030441 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.041707 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.041773 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtjzt\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.042000 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.071307 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.089558 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.091322 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.095631 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.095981 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.100730 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.100880 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.101312 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.105939 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.106243 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-x99bs"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.106976 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.107089 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.221957 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222037 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222107 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222139 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222164 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j2ff\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222182 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222213 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222231 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222256 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222271 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.222314 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323342 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323634 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323670 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323687 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323708 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j2ff\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323724 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323740 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323755 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323773 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323787 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.323845 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.325576 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.325802 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.326113 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.326705 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.327798 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.328045 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.340521 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.360732 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.361257 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.361649 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.374580 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.375913 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j2ff\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.464317 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.673712 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.694149 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 13:08:10 crc kubenswrapper[4740]: I0216 13:08:10.897115 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerStarted","Data":"934ceceace7365e9c0090e9a012126311d06e3cf25d1f4641361df1885a08c73"}
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.033802 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.376882 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.378446 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.382408 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.383264 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7lmgx"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.384018 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.384622 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.387959 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.390065 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.558754 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.558834 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkhz\" (UniqueName: \"kubernetes.io/projected/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kube-api-access-2pkhz\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.558888 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.558951 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.559168 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.559199 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.559238 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-default\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.559288 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kolla-config\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660690 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660787 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660828 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660863 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-default\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kolla-config\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660929 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pkhz\" (UniqueName: \"kubernetes.io/projected/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kube-api-access-2pkhz\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.660995 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.661766 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.662791 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kolla-config\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.663177 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.663297 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.666442 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.681479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-config-data-default\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.682210 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.687461 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pkhz\" (UniqueName: \"kubernetes.io/projected/9b2a3679-b8ef-4221-a9f6-ccd863696aa8-kube-api-access-2pkhz\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.689529 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9b2a3679-b8ef-4221-a9f6-ccd863696aa8\") " pod="openstack/openstack-galera-0"
Feb 16 13:08:11 crc kubenswrapper[4740]: I0216 13:08:11.715229 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.591547 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.593161 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.595767 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.596139 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kzk2x"
Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.596488 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.596625 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.601138 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780149 4740
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780194 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780226 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqmc9\" (UniqueName: \"kubernetes.io/projected/0edd2079-790d-4061-aaf4-4213fe6adc7a-kube-api-access-kqmc9\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780307 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780333 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.780363 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.822356 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.824968 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.829715 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lbppm" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.830003 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.834031 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.843757 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.878889 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-48mhx"] Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.891042 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893009 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893081 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893098 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893125 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893146 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqmc9\" (UniqueName: \"kubernetes.io/projected/0edd2079-790d-4061-aaf4-4213fe6adc7a-kube-api-access-kqmc9\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893212 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893238 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893652 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.893942 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.895027 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.907037 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.909639 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.910053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0edd2079-790d-4061-aaf4-4213fe6adc7a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.911101 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0edd2079-790d-4061-aaf4-4213fe6adc7a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.916417 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqmc9\" (UniqueName: \"kubernetes.io/projected/0edd2079-790d-4061-aaf4-4213fe6adc7a-kube-api-access-kqmc9\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.955059 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0edd2079-790d-4061-aaf4-4213fe6adc7a\") " pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.957322 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48mhx"] Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994381 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-memcached-tls-certs\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994454 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994492 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-kolla-config\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994532 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z57d4\" (UniqueName: \"kubernetes.io/projected/16622824-15d7-4ff1-8eac-85fe5d8da9db-kube-api-access-z57d4\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994581 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-combined-ca-bundle\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994600 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-gxgc8\" (UniqueName: \"kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994651 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:12 crc kubenswrapper[4740]: I0216 13:08:12.994671 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-config-data\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095699 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-memcached-tls-certs\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095763 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095788 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-kolla-config\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095827 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z57d4\" (UniqueName: \"kubernetes.io/projected/16622824-15d7-4ff1-8eac-85fe5d8da9db-kube-api-access-z57d4\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095866 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-combined-ca-bundle\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095884 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxgc8\" (UniqueName: \"kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095920 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.095935 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-config-data\") pod \"memcached-0\" (UID: 
\"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.096688 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-config-data\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.097837 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.098476 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16622824-15d7-4ff1-8eac-85fe5d8da9db-kolla-config\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.100656 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.102028 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-combined-ca-bundle\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.113236 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/16622824-15d7-4ff1-8eac-85fe5d8da9db-memcached-tls-certs\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.121284 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z57d4\" (UniqueName: \"kubernetes.io/projected/16622824-15d7-4ff1-8eac-85fe5d8da9db-kube-api-access-z57d4\") pod \"memcached-0\" (UID: \"16622824-15d7-4ff1-8eac-85fe5d8da9db\") " pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.126609 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxgc8\" (UniqueName: \"kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8\") pod \"certified-operators-48mhx\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.154189 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.217285 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:13 crc kubenswrapper[4740]: I0216 13:08:13.299310 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.131781 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.133451 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.136543 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wpsbn" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.145304 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.243013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpkcs\" (UniqueName: \"kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs\") pod \"kube-state-metrics-0\" (UID: \"dffdca64-bf57-49ca-9d8d-c6c752e59a37\") " pod="openstack/kube-state-metrics-0" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.344614 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpkcs\" (UniqueName: \"kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs\") pod \"kube-state-metrics-0\" (UID: \"dffdca64-bf57-49ca-9d8d-c6c752e59a37\") " pod="openstack/kube-state-metrics-0" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.368337 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpkcs\" (UniqueName: \"kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs\") pod \"kube-state-metrics-0\" (UID: \"dffdca64-bf57-49ca-9d8d-c6c752e59a37\") " pod="openstack/kube-state-metrics-0" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.456169 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.575221 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.575286 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.575337 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.576060 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.576115 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3" gracePeriod=600 Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.991612 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3" exitCode=0 Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.991663 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3"} Feb 16 13:08:15 crc kubenswrapper[4740]: I0216 13:08:15.991709 4740 scope.go:117] "RemoveContainer" containerID="147ddc5dfd397eaf37f2485e4f80348a5508133229bd62cf04713fa9d04fd11c" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.411343 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qnt79"] Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.412855 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.414718 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.416564 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.417418 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6p6bf" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.459888 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79"] Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493486 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04335a5d-7cac-4a47-982c-70cae9db69ff-scripts\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") 
" pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493577 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwllx\" (UniqueName: \"kubernetes.io/projected/04335a5d-7cac-4a47-982c-70cae9db69ff-kube-api-access-hwllx\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493604 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493654 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-ovn-controller-tls-certs\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493687 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-log-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493707 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-combined-ca-bundle\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " 
pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.493733 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.524875 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-crblj"] Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.526484 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.536702 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-crblj"] Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.594672 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-run\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.594867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-etc-ovs\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.594896 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b2536c4-0b82-4b42-9fe3-20237884d803-scripts\") pod \"ovn-controller-ovs-crblj\" (UID: 
\"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.594956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04335a5d-7cac-4a47-982c-70cae9db69ff-scripts\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595016 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwllx\" (UniqueName: \"kubernetes.io/projected/04335a5d-7cac-4a47-982c-70cae9db69ff-kube-api-access-hwllx\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595063 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595088 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-log\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595495 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg7fv\" (UniqueName: \"kubernetes.io/projected/9b2536c4-0b82-4b42-9fe3-20237884d803-kube-api-access-rg7fv\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" 
Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-ovn-controller-tls-certs\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595618 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-log-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595640 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-combined-ca-bundle\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595711 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-lib\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.597734 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.597797 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04335a5d-7cac-4a47-982c-70cae9db69ff-scripts\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595799 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.595926 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-log-ovn\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.598670 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04335a5d-7cac-4a47-982c-70cae9db69ff-var-run\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.604157 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-combined-ca-bundle\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.604729 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04335a5d-7cac-4a47-982c-70cae9db69ff-ovn-controller-tls-certs\") pod \"ovn-controller-qnt79\" (UID: 
\"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.628088 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwllx\" (UniqueName: \"kubernetes.io/projected/04335a5d-7cac-4a47-982c-70cae9db69ff-kube-api-access-hwllx\") pod \"ovn-controller-qnt79\" (UID: \"04335a5d-7cac-4a47-982c-70cae9db69ff\") " pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700315 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-run\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700401 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-etc-ovs\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700419 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b2536c4-0b82-4b42-9fe3-20237884d803-scripts\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700446 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-log\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700467 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg7fv\" (UniqueName: \"kubernetes.io/projected/9b2536c4-0b82-4b42-9fe3-20237884d803-kube-api-access-rg7fv\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700516 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-lib\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700804 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-run\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700858 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-lib\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.700954 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-var-log\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.701039 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/9b2536c4-0b82-4b42-9fe3-20237884d803-etc-ovs\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.702934 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b2536c4-0b82-4b42-9fe3-20237884d803-scripts\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.729498 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg7fv\" (UniqueName: \"kubernetes.io/projected/9b2536c4-0b82-4b42-9fe3-20237884d803-kube-api-access-rg7fv\") pod \"ovn-controller-ovs-crblj\" (UID: \"9b2536c4-0b82-4b42-9fe3-20237884d803\") " pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.819136 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79" Feb 16 13:08:18 crc kubenswrapper[4740]: I0216 13:08:18.852770 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.014190 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerStarted","Data":"57a77e39696732ba0c2e89d52e10f74cd6c56edebaba2ddd54807982f361b511"} Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.290208 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.291532 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.294618 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.295791 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.296063 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zf6qv" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.303183 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.304904 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.305107 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409145 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409218 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409260 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409287 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfrpj\" (UniqueName: \"kubernetes.io/projected/0ba53212-5a6f-45cb-9547-cccd4b36aa32-kube-api-access-kfrpj\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409308 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409343 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409402 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-config\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.409425 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.510946 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511371 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-config\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511404 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511434 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511466 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " 
pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511504 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511527 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfrpj\" (UniqueName: \"kubernetes.io/projected/0ba53212-5a6f-45cb-9547-cccd4b36aa32-kube-api-access-kfrpj\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511545 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.511798 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.512214 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-config\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.512568 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.512766 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ba53212-5a6f-45cb-9547-cccd4b36aa32-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.516583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.519520 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.537599 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfrpj\" (UniqueName: \"kubernetes.io/projected/0ba53212-5a6f-45cb-9547-cccd4b36aa32-kube-api-access-kfrpj\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.539970 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba53212-5a6f-45cb-9547-cccd4b36aa32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " 
pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.551774 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0ba53212-5a6f-45cb-9547-cccd4b36aa32\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:19 crc kubenswrapper[4740]: I0216 13:08:19.633801 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.068248 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.073821 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.078706 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.078795 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.079193 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-hfz5r" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.079238 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.082019 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154236 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154329 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-config\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154355 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daca8d6b-05ed-4888-9833-9076a4256166-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154378 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 
crc kubenswrapper[4740]: I0216 13:08:22.154715 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.154739 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4pm\" (UniqueName: \"kubernetes.io/projected/daca8d6b-05ed-4888-9833-9076a4256166-kube-api-access-9v4pm\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.256242 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.256312 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-config\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.256342 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daca8d6b-05ed-4888-9833-9076a4256166-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.256367 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.256829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daca8d6b-05ed-4888-9833-9076a4256166-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257021 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257101 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257130 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v4pm\" (UniqueName: \"kubernetes.io/projected/daca8d6b-05ed-4888-9833-9076a4256166-kube-api-access-9v4pm\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257519 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257642 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.257711 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.258057 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daca8d6b-05ed-4888-9833-9076a4256166-config\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.270100 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.270220 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.274330 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9v4pm\" (UniqueName: \"kubernetes.io/projected/daca8d6b-05ed-4888-9833-9076a4256166-kube-api-access-9v4pm\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.279484 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/daca8d6b-05ed-4888-9833-9076a4256166-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.290482 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"daca8d6b-05ed-4888-9833-9076a4256166\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:22 crc kubenswrapper[4740]: I0216 13:08:22.395855 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.417838 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"] Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.425729 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.430433 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"] Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.495284 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.495344 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4jl\" (UniqueName: \"kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.495428 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.596408 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.596468 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-ww4jl\" (UniqueName: \"kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.596521 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.596972 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.596971 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.622029 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4jl\" (UniqueName: \"kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl\") pod \"redhat-operators-skhl7\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:23 crc kubenswrapper[4740]: I0216 13:08:23.740634 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:26 crc kubenswrapper[4740]: I0216 13:08:26.811942 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.433122 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.434375 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7mmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-vnpwr_openstack(6c71f0c5-67a1-4d67-b2de-dba8295ef084): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.435752 4740 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" podUID="6c71f0c5-67a1-4d67-b2de-dba8295ef084" Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.467012 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.467204 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npxln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-969lr_openstack(6a869f8c-c538-49fc-9e00-9b8b4b298687): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:08:27 crc kubenswrapper[4740]: E0216 13:08:27.468367 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" podUID="6a869f8c-c538-49fc-9e00-9b8b4b298687" Feb 16 13:08:28 crc kubenswrapper[4740]: I0216 13:08:28.077084 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0edd2079-790d-4061-aaf4-4213fe6adc7a","Type":"ContainerStarted","Data":"e26ac5cb88656bf9aee80557f4562f056cae18fedfbcc93ad5c41d158e7fe30d"} Feb 16 13:08:28 crc kubenswrapper[4740]: I0216 13:08:28.931210 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:28.999637 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config\") pod \"6a869f8c-c538-49fc-9e00-9b8b4b298687\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.000102 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npxln\" (UniqueName: \"kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln\") pod \"6a869f8c-c538-49fc-9e00-9b8b4b298687\" (UID: \"6a869f8c-c538-49fc-9e00-9b8b4b298687\") " Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.000661 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config" (OuterVolumeSpecName: "config") pod "6a869f8c-c538-49fc-9e00-9b8b4b298687" (UID: "6a869f8c-c538-49fc-9e00-9b8b4b298687"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.001277 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a869f8c-c538-49fc-9e00-9b8b4b298687-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.004772 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln" (OuterVolumeSpecName: "kube-api-access-npxln") pod "6a869f8c-c538-49fc-9e00-9b8b4b298687" (UID: "6a869f8c-c538-49fc-9e00-9b8b4b298687"). InnerVolumeSpecName "kube-api-access-npxln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.097675 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.099657 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-969lr" event={"ID":"6a869f8c-c538-49fc-9e00-9b8b4b298687","Type":"ContainerDied","Data":"72745cfed959172c621e0861c20ad46e7969585903965049f95725965dd4a30e"} Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.104689 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npxln\" (UniqueName: \"kubernetes.io/projected/6a869f8c-c538-49fc-9e00-9b8b4b298687-kube-api-access-npxln\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.108132 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" event={"ID":"6c71f0c5-67a1-4d67-b2de-dba8295ef084","Type":"ContainerDied","Data":"5224238b02027848f7d0fad895fde7f066f7a57334c88225bed293830669bbbe"} Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.108171 4740 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="5224238b02027848f7d0fad895fde7f066f7a57334c88225bed293830669bbbe" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.123103 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.207214 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config\") pod \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.207302 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc\") pod \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.207455 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7mmx\" (UniqueName: \"kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx\") pod \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\" (UID: \"6c71f0c5-67a1-4d67-b2de-dba8295ef084\") " Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.208295 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config" (OuterVolumeSpecName: "config") pod "6c71f0c5-67a1-4d67-b2de-dba8295ef084" (UID: "6c71f0c5-67a1-4d67-b2de-dba8295ef084"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.208732 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c71f0c5-67a1-4d67-b2de-dba8295ef084" (UID: "6c71f0c5-67a1-4d67-b2de-dba8295ef084"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.220051 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx" (OuterVolumeSpecName: "kube-api-access-t7mmx") pod "6c71f0c5-67a1-4d67-b2de-dba8295ef084" (UID: "6c71f0c5-67a1-4d67-b2de-dba8295ef084"). InnerVolumeSpecName "kube-api-access-t7mmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.233590 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"] Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.240929 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-969lr"] Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.313555 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7mmx\" (UniqueName: \"kubernetes.io/projected/6c71f0c5-67a1-4d67-b2de-dba8295ef084-kube-api-access-t7mmx\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.313975 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.313991 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6c71f0c5-67a1-4d67-b2de-dba8295ef084-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.314172 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a869f8c-c538-49fc-9e00-9b8b4b298687" path="/var/lib/kubelet/pods/6a869f8c-c538-49fc-9e00-9b8b4b298687/volumes" Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.376139 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.385932 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48mhx"] Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.395232 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.402011 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.440605 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 13:08:29 crc kubenswrapper[4740]: W0216 13:08:29.491646 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b2a3679_b8ef_4221_a9f6_ccd863696aa8.slice/crio-fe2c96b45e693300c10a72cae7c5527c6304aa177cf3a260602b9aca67a0c10b WatchSource:0}: Error finding container fe2c96b45e693300c10a72cae7c5527c6304aa177cf3a260602b9aca67a0c10b: Status 404 returned error can't find the container with id fe2c96b45e693300c10a72cae7c5527c6304aa177cf3a260602b9aca67a0c10b Feb 16 13:08:29 crc kubenswrapper[4740]: W0216 13:08:29.495013 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddffdca64_bf57_49ca_9d8d_c6c752e59a37.slice/crio-866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d 
WatchSource:0}: Error finding container 866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d: Status 404 returned error can't find the container with id 866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.581299 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79"] Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.597317 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"] Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.675664 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 13:08:29 crc kubenswrapper[4740]: W0216 13:08:29.755996 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04335a5d_7cac_4a47_982c_70cae9db69ff.slice/crio-2c1141ffa9d5e7216afee78d064c4a42b7957154367dfa49a24c40909dec7caa WatchSource:0}: Error finding container 2c1141ffa9d5e7216afee78d064c4a42b7957154367dfa49a24c40909dec7caa: Status 404 returned error can't find the container with id 2c1141ffa9d5e7216afee78d064c4a42b7957154367dfa49a24c40909dec7caa Feb 16 13:08:29 crc kubenswrapper[4740]: W0216 13:08:29.850146 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ba53212_5a6f_45cb_9547_cccd4b36aa32.slice/crio-88fcc056148622a2374e59e8d1c81ece16923b48f3ac8e63880ff8229d2c17cd WatchSource:0}: Error finding container 88fcc056148622a2374e59e8d1c81ece16923b48f3ac8e63880ff8229d2c17cd: Status 404 returned error can't find the container with id 88fcc056148622a2374e59e8d1c81ece16923b48f3ac8e63880ff8229d2c17cd Feb 16 13:08:29 crc kubenswrapper[4740]: I0216 13:08:29.887954 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-crblj"] Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 
13:08:30.128162 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79" event={"ID":"04335a5d-7cac-4a47-982c-70cae9db69ff","Type":"ContainerStarted","Data":"2c1141ffa9d5e7216afee78d064c4a42b7957154367dfa49a24c40909dec7caa"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.131215 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9b2a3679-b8ef-4221-a9f6-ccd863696aa8","Type":"ContainerStarted","Data":"fe2c96b45e693300c10a72cae7c5527c6304aa177cf3a260602b9aca67a0c10b"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.135330 4740 generic.go:334] "Generic (PLEG): container finished" podID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerID="3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f" exitCode=0 Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.135406 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" event={"ID":"23232c7f-b058-4eec-850d-b28aecf39a2f","Type":"ContainerDied","Data":"3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.150292 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0ba53212-5a6f-45cb-9547-cccd4b36aa32","Type":"ContainerStarted","Data":"88fcc056148622a2374e59e8d1c81ece16923b48f3ac8e63880ff8229d2c17cd"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.158950 4740 generic.go:334] "Generic (PLEG): container finished" podID="1e068ce5-e7a1-430c-97f7-fed550912288" containerID="72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9" exitCode=0 Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.159020 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" 
event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerDied","Data":"72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.159047 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerStarted","Data":"e25ea221b5fa4528f6319f69abf2088a3814b82a1e688ade98fa8da437436a8d"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.161717 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"daca8d6b-05ed-4888-9833-9076a4256166","Type":"ContainerStarted","Data":"d7a120c44daee76d758cad21793f47e0c8eddcda3951acfaa97024d72c57686d"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.164151 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerStarted","Data":"1fd3890eb822343ee419a082a86d7c0f7f37da9e46f4c355fdf04fe11a7d6219"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.166305 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"16622824-15d7-4ff1-8eac-85fe5d8da9db","Type":"ContainerStarted","Data":"fd1ec77c679da4c311a1b5aeb0eb5c952452696d2fb82d345cca583a7e0e44ee"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.169154 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.171794 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"dffdca64-bf57-49ca-9d8d-c6c752e59a37","Type":"ContainerStarted","Data":"866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.175955 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-crblj" event={"ID":"9b2536c4-0b82-4b42-9fe3-20237884d803","Type":"ContainerStarted","Data":"c2720b066785d1f7aeb61e1aee929c024176a2112c4dc817a63d2876ff085255"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.180722 4740 generic.go:334] "Generic (PLEG): container finished" podID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerID="cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f" exitCode=0 Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.180790 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vnpwr" Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.180957 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" event={"ID":"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8","Type":"ContainerDied","Data":"cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f"} Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.272023 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"] Feb 16 13:08:30 crc kubenswrapper[4740]: I0216 13:08:30.277961 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vnpwr"] Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.192055 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerStarted","Data":"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109"} Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.196168 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="aca31aa1-429e-4f65-acd5-8896734d0713" containerID="34e7fc9ac737075feeb07ec3e7ed9c671f07626ad33bba4d05665def21010930" exitCode=0 Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.196419 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerDied","Data":"34e7fc9ac737075feeb07ec3e7ed9c671f07626ad33bba4d05665def21010930"} Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.199222 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerStarted","Data":"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133"} Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.300800 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c71f0c5-67a1-4d67-b2de-dba8295ef084" path="/var/lib/kubelet/pods/6c71f0c5-67a1-4d67-b2de-dba8295ef084/volumes" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.404436 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-b4j4m"] Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.405772 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.408980 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.417196 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b4j4m"] Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.468766 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp9df\" (UniqueName: \"kubernetes.io/projected/ad1b2300-a42b-4a99-b186-7661bb410a36-kube-api-access-wp9df\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.468886 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-combined-ca-bundle\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.468947 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovs-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.469028 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovn-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " 
pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.469079 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.469145 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1b2300-a42b-4a99-b186-7661bb410a36-config\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571069 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp9df\" (UniqueName: \"kubernetes.io/projected/ad1b2300-a42b-4a99-b186-7661bb410a36-kube-api-access-wp9df\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-combined-ca-bundle\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovs-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " 
pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571249 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovn-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571297 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571364 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1b2300-a42b-4a99-b186-7661bb410a36-config\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571560 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovs-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.571638 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ad1b2300-a42b-4a99-b186-7661bb410a36-ovn-rundir\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 
13:08:31.572187 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1b2300-a42b-4a99-b186-7661bb410a36-config\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.583284 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.591331 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"] Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.595784 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad1b2300-a42b-4a99-b186-7661bb410a36-combined-ca-bundle\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.606835 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp9df\" (UniqueName: \"kubernetes.io/projected/ad1b2300-a42b-4a99-b186-7661bb410a36-kube-api-access-wp9df\") pod \"ovn-controller-metrics-b4j4m\" (UID: \"ad1b2300-a42b-4a99-b186-7661bb410a36\") " pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.626641 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"] Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.628554 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.632627 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.651537 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"] Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.683270 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.683436 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.683559 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.683620 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdxj9\" (UniqueName: \"kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.742606 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-b4j4m" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.785701 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.785777 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdxj9\" (UniqueName: \"kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.785910 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.785959 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.786686 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.786862 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.786956 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.815673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdxj9\" (UniqueName: \"kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9\") pod \"dnsmasq-dns-5bf47b49b7-c87cl\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.870874 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"] Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.912534 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"] Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.915403 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.920607 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.932583 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"] Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.988446 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.988600 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.988739 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gndhd\" (UniqueName: \"kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.988860 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " 
pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:31 crc kubenswrapper[4740]: I0216 13:08:31.988892 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.016956 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.090098 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.090196 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gndhd\" (UniqueName: \"kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.090227 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.090249 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.090270 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.092299 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.093880 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.094995 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.095286 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: 
\"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.115504 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gndhd\" (UniqueName: \"kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd\") pod \"dnsmasq-dns-8554648995-76rjc\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:32 crc kubenswrapper[4740]: I0216 13:08:32.258676 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:37 crc kubenswrapper[4740]: I0216 13:08:37.952123 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"] Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.047655 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wqc4t"] Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.049338 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.051980 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b4j4m"] Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.060265 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqc4t"] Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.103389 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfwsz\" (UniqueName: \"kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.103719 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.104015 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.142967 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"] Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.206897 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfwsz\" 
(UniqueName: \"kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.207009 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.207044 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.207639 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.207688 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.234009 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfwsz\" (UniqueName: 
\"kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz\") pod \"community-operators-wqc4t\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.259914 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" event={"ID":"23232c7f-b058-4eec-850d-b28aecf39a2f","Type":"ContainerStarted","Data":"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2"} Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.260098 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="dnsmasq-dns" containerID="cri-o://0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2" gracePeriod=10 Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.260186 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.265891 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" event={"ID":"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8","Type":"ContainerStarted","Data":"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c"} Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.265839 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="dnsmasq-dns" containerID="cri-o://70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c" gracePeriod=10 Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.266051 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.313696 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" podStartSLOduration=10.964414827 podStartE2EDuration="30.313676422s" podCreationTimestamp="2026-02-16 13:08:08 +0000 UTC" firstStartedPulling="2026-02-16 13:08:09.560142838 +0000 UTC m=+916.936491569" lastFinishedPulling="2026-02-16 13:08:28.909404443 +0000 UTC m=+936.285753164" observedRunningTime="2026-02-16 13:08:38.311577296 +0000 UTC m=+945.687926027" watchObservedRunningTime="2026-02-16 13:08:38.313676422 +0000 UTC m=+945.690025143" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.316436 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" podStartSLOduration=10.980704879 podStartE2EDuration="30.316429138s" podCreationTimestamp="2026-02-16 13:08:08 +0000 UTC" firstStartedPulling="2026-02-16 13:08:09.46171547 +0000 UTC m=+916.838064191" lastFinishedPulling="2026-02-16 13:08:28.797439729 +0000 UTC m=+936.173788450" observedRunningTime="2026-02-16 13:08:38.293611781 +0000 UTC m=+945.669960502" watchObservedRunningTime="2026-02-16 13:08:38.316429138 +0000 UTC m=+945.692777859" Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.377133 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:38 crc kubenswrapper[4740]: W0216 13:08:38.558054 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92418e50_20f2_495c_9b06_963a5cd506d1.slice/crio-9fcab72dfdae204ac7d5555a0d11d6020c89a6deabe72c826aa4804ab6026578 WatchSource:0}: Error finding container 9fcab72dfdae204ac7d5555a0d11d6020c89a6deabe72c826aa4804ab6026578: Status 404 returned error can't find the container with id 9fcab72dfdae204ac7d5555a0d11d6020c89a6deabe72c826aa4804ab6026578 Feb 16 13:08:38 crc kubenswrapper[4740]: W0216 13:08:38.566076 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9909a57b_336c_4687_855f_495a78d21af7.slice/crio-ba54eead2dab21215317f93298356fea3b351391f33454128a61a0ee8e86082c WatchSource:0}: Error finding container ba54eead2dab21215317f93298356fea3b351391f33454128a61a0ee8e86082c: Status 404 returned error can't find the container with id ba54eead2dab21215317f93298356fea3b351391f33454128a61a0ee8e86082c Feb 16 13:08:38 crc kubenswrapper[4740]: W0216 13:08:38.569774 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad1b2300_a42b_4a99_b186_7661bb410a36.slice/crio-4ac73e637d8794dca938b293d35a65541f42b6bf655c926cc2e9a3f1c3a43d92 WatchSource:0}: Error finding container 4ac73e637d8794dca938b293d35a65541f42b6bf655c926cc2e9a3f1c3a43d92: Status 404 returned error can't find the container with id 4ac73e637d8794dca938b293d35a65541f42b6bf655c926cc2e9a3f1c3a43d92 Feb 16 13:08:38 crc kubenswrapper[4740]: I0216 13:08:38.971599 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.017796 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config\") pod \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.018009 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc\") pod \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.018059 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rckbt\" (UniqueName: \"kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt\") pod \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\" (UID: \"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.078617 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt" (OuterVolumeSpecName: "kube-api-access-rckbt") pod "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" (UID: "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8"). InnerVolumeSpecName "kube-api-access-rckbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.090092 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.119878 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config\") pod \"23232c7f-b058-4eec-850d-b28aecf39a2f\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.119950 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc\") pod \"23232c7f-b058-4eec-850d-b28aecf39a2f\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.120069 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlv5h\" (UniqueName: \"kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h\") pod \"23232c7f-b058-4eec-850d-b28aecf39a2f\" (UID: \"23232c7f-b058-4eec-850d-b28aecf39a2f\") " Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.120402 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rckbt\" (UniqueName: \"kubernetes.io/projected/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-kube-api-access-rckbt\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.146847 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h" (OuterVolumeSpecName: "kube-api-access-xlv5h") pod "23232c7f-b058-4eec-850d-b28aecf39a2f" (UID: "23232c7f-b058-4eec-850d-b28aecf39a2f"). InnerVolumeSpecName "kube-api-access-xlv5h". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.222049 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlv5h\" (UniqueName: \"kubernetes.io/projected/23232c7f-b058-4eec-850d-b28aecf39a2f-kube-api-access-xlv5h\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.228859 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqc4t"]
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.281854 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-crblj" event={"ID":"9b2536c4-0b82-4b42-9fe3-20237884d803","Type":"ContainerStarted","Data":"216c4948a4c881dfe67ae6cb7f8b7f19445793dbd5d5d9ccf750393f440da614"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.297952 4740 generic.go:334] "Generic (PLEG): container finished" podID="1e068ce5-e7a1-430c-97f7-fed550912288" containerID="ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7" exitCode=0
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.324232 4740 generic.go:334] "Generic (PLEG): container finished" podID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerID="0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2" exitCode=0
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.324448 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.354438 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23232c7f-b058-4eec-850d-b28aecf39a2f" (UID: "23232c7f-b058-4eec-850d-b28aecf39a2f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.354707 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config" (OuterVolumeSpecName: "config") pod "23232c7f-b058-4eec-850d-b28aecf39a2f" (UID: "23232c7f-b058-4eec-850d-b28aecf39a2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.372421 4740 generic.go:334] "Generic (PLEG): container finished" podID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerID="70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c" exitCode=0
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.372561 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.415164 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" (UID: "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.433063 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.433091 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23232c7f-b058-4eec-850d-b28aecf39a2f-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.433101 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.435579 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config" (OuterVolumeSpecName: "config") pod "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" (UID: "4d209d0f-d8e6-4e45-aca9-f1e3245be3f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.436801 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.728440587 podStartE2EDuration="27.436786132s" podCreationTimestamp="2026-02-16 13:08:12 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.485217931 +0000 UTC m=+936.861566652" lastFinishedPulling="2026-02-16 13:08:37.193563466 +0000 UTC m=+944.569912197" observedRunningTime="2026-02-16 13:08:39.435915445 +0000 UTC m=+946.812264166" watchObservedRunningTime="2026-02-16 13:08:39.436786132 +0000 UTC m=+946.813134853"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.529616 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerDied","Data":"ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.529943 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0edd2079-790d-4061-aaf4-4213fe6adc7a","Type":"ContainerStarted","Data":"657317f31c1050ec69af0b3e23dd72006ff7b6d9c80a87b62faa23e6ceaa2d28"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.529976 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.529997 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerStarted","Data":"6d8ddc766e620293e4ce19966be3eade1f701ea7f4bc85aaa8dbabf74209de85"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530018 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" event={"ID":"23232c7f-b058-4eec-850d-b28aecf39a2f","Type":"ContainerDied","Data":"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530030 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ccvmk" event={"ID":"23232c7f-b058-4eec-850d-b28aecf39a2f","Type":"ContainerDied","Data":"4428fc13eb34f9023844cc127231859b8a3b24009a896cd3317e7735b1465f72"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530039 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b4j4m" event={"ID":"ad1b2300-a42b-4a99-b186-7661bb410a36","Type":"ContainerStarted","Data":"4ac73e637d8794dca938b293d35a65541f42b6bf655c926cc2e9a3f1c3a43d92"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530049 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" event={"ID":"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8","Type":"ContainerDied","Data":"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530059 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7wlm8" event={"ID":"4d209d0f-d8e6-4e45-aca9-f1e3245be3f8","Type":"ContainerDied","Data":"e3b33a27e9f185d3c8746ab220eb49acc501a2fe3217ba858f1fea1221ed0edf"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530068 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-76rjc" event={"ID":"92418e50-20f2-495c-9b06-963a5cd506d1","Type":"ContainerStarted","Data":"9fcab72dfdae204ac7d5555a0d11d6020c89a6deabe72c826aa4804ab6026578"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530077 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerStarted","Data":"c019ef5cfe36bc351706aadd4baf7b12f1204352eb6b5f824b90ceeaf17080aa"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530086 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"16622824-15d7-4ff1-8eac-85fe5d8da9db","Type":"ContainerStarted","Data":"41fe440b607b3e581c040bce88540e4c075d11a1b13113192d67d650946ced0f"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530095 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" event={"ID":"9909a57b-336c-4687-855f-495a78d21af7","Type":"ContainerStarted","Data":"ba54eead2dab21215317f93298356fea3b351391f33454128a61a0ee8e86082c"}
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.530124 4740 scope.go:117] "RemoveContainer" containerID="0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.535739 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.555699 4740 scope.go:117] "RemoveContainer" containerID="3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.594728 4740 scope.go:117] "RemoveContainer" containerID="0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2"
Feb 16 13:08:39 crc kubenswrapper[4740]: E0216 13:08:39.595296 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2\": container with ID starting with 0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2 not found: ID does not exist" containerID="0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.595358 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2"} err="failed to get container status \"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2\": rpc error: code = NotFound desc = could not find container \"0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2\": container with ID starting with 0821bfb91851fa28f74c93a3afda2c567094d0cec2023eb40266682ce375b1c2 not found: ID does not exist"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.595409 4740 scope.go:117] "RemoveContainer" containerID="3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f"
Feb 16 13:08:39 crc kubenswrapper[4740]: E0216 13:08:39.595768 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f\": container with ID starting with 3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f not found: ID does not exist" containerID="3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.595796 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f"} err="failed to get container status \"3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f\": rpc error: code = NotFound desc = could not find container \"3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f\": container with ID starting with 3247fd41be7b4c1cfd388c6378ad5b838729fe36f7ed8839635cdf8b6782392f not found: ID does not exist"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.595829 4740 scope.go:117] "RemoveContainer" containerID="70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.637616 4740 scope.go:117] "RemoveContainer" containerID="cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.672080 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"]
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.698880 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ccvmk"]
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.711685 4740 scope.go:117] "RemoveContainer" containerID="70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c"
Feb 16 13:08:39 crc kubenswrapper[4740]: E0216 13:08:39.715925 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c\": container with ID starting with 70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c not found: ID does not exist" containerID="70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.716074 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c"} err="failed to get container status \"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c\": rpc error: code = NotFound desc = could not find container \"70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c\": container with ID starting with 70a7385338c5eb55dbdd501ea53db10ecf301cfa3db660077e347e014f07db9c not found: ID does not exist"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.716163 4740 scope.go:117] "RemoveContainer" containerID="cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f"
Feb 16 13:08:39 crc kubenswrapper[4740]: E0216 13:08:39.718084 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f\": container with ID starting with cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f not found: ID does not exist" containerID="cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.718198 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f"} err="failed to get container status \"cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f\": rpc error: code = NotFound desc = could not find container \"cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f\": container with ID starting with cd41b26aabe7535a2ff8d2310ce8b640c7aefe3a05914516b53f13f812b5ce6f not found: ID does not exist"
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.757207 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"]
Feb 16 13:08:39 crc kubenswrapper[4740]: I0216 13:08:39.766975 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7wlm8"]
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.413496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dffdca64-bf57-49ca-9d8d-c6c752e59a37","Type":"ContainerStarted","Data":"393ab583d053b27f8beb9f7c43ec09687ddca9d3ed124563beb0f63d010c9ebb"}
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.413581 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.415758 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b2536c4-0b82-4b42-9fe3-20237884d803" containerID="216c4948a4c881dfe67ae6cb7f8b7f19445793dbd5d5d9ccf750393f440da614" exitCode=0
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.415832 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-crblj" event={"ID":"9b2536c4-0b82-4b42-9fe3-20237884d803","Type":"ContainerDied","Data":"216c4948a4c881dfe67ae6cb7f8b7f19445793dbd5d5d9ccf750393f440da614"}
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.419223 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"daca8d6b-05ed-4888-9833-9076a4256166","Type":"ContainerStarted","Data":"53fa5ab99d61461f0532c2a5ac09966c06c0389b14884e3296fd140f78453484"}
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.422962 4740 generic.go:334] "Generic (PLEG): container finished" podID="92418e50-20f2-495c-9b06-963a5cd506d1" containerID="6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302" exitCode=0
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.423096 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-76rjc" event={"ID":"92418e50-20f2-495c-9b06-963a5cd506d1","Type":"ContainerDied","Data":"6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302"}
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.431091 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.217727926 podStartE2EDuration="25.431071289s" podCreationTimestamp="2026-02-16 13:08:15 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.516873097 +0000 UTC m=+936.893221808" lastFinishedPulling="2026-02-16 13:08:38.73021645 +0000 UTC m=+946.106565171" observedRunningTime="2026-02-16 13:08:40.430323095 +0000 UTC m=+947.806671866" watchObservedRunningTime="2026-02-16 13:08:40.431071289 +0000 UTC m=+947.807420010"
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.432896 4740 generic.go:334] "Generic (PLEG): container finished" podID="aca31aa1-429e-4f65-acd5-8896734d0713" containerID="c019ef5cfe36bc351706aadd4baf7b12f1204352eb6b5f824b90ceeaf17080aa" exitCode=0
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.433083 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerDied","Data":"c019ef5cfe36bc351706aadd4baf7b12f1204352eb6b5f824b90ceeaf17080aa"}
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.438338 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9b2a3679-b8ef-4221-a9f6-ccd863696aa8","Type":"ContainerStarted","Data":"62dc156e2ff56e656b16dd1a559de7911f6d6db8b4776d5d63e1362b33d3e7f8"}
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.441711 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0ba53212-5a6f-45cb-9547-cccd4b36aa32","Type":"ContainerStarted","Data":"d9d1e853b1d7660fcf74e6807238d3d53eb29720fa3698242837712fb1a7222b"}
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.445010 4740 generic.go:334] "Generic (PLEG): container finished" podID="9909a57b-336c-4687-855f-495a78d21af7" containerID="da3a287d7891a787094b6dfd14fda8d39fc078ebe1345f09ce6fe824a1be10a9" exitCode=0
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.445091 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" event={"ID":"9909a57b-336c-4687-855f-495a78d21af7","Type":"ContainerDied","Data":"da3a287d7891a787094b6dfd14fda8d39fc078ebe1345f09ce6fe824a1be10a9"}
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.446695 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79" event={"ID":"04335a5d-7cac-4a47-982c-70cae9db69ff","Type":"ContainerStarted","Data":"7532e37515439a8c65a2865febec5f9223ab31c4950f64ee6aaa622be3c70b96"}
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.447241 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qnt79"
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.448717 4740 generic.go:334] "Generic (PLEG): container finished" podID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerID="6c1987d37eda48bc957fc38cb1ada6838a4e15b9f5f27415b2d688878ac2b28b" exitCode=0
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.448765 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerDied","Data":"6c1987d37eda48bc957fc38cb1ada6838a4e15b9f5f27415b2d688878ac2b28b"}
Feb 16 13:08:40 crc kubenswrapper[4740]: I0216 13:08:40.513252 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qnt79" podStartSLOduration=14.575198163 podStartE2EDuration="22.513230205s" podCreationTimestamp="2026-02-16 13:08:18 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.760102651 +0000 UTC m=+937.136451372" lastFinishedPulling="2026-02-16 13:08:37.698134693 +0000 UTC m=+945.074483414" observedRunningTime="2026-02-16 13:08:40.505429639 +0000 UTC m=+947.881778390" watchObservedRunningTime="2026-02-16 13:08:40.513230205 +0000 UTC m=+947.889578936"
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.293298 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" path="/var/lib/kubelet/pods/23232c7f-b058-4eec-850d-b28aecf39a2f/volumes"
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.294354 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" path="/var/lib/kubelet/pods/4d209d0f-d8e6-4e45-aca9-f1e3245be3f8/volumes"
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.458659 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerStarted","Data":"23c26fd38fe2c111190f23402f390777887a9905dc758d9f6ac51ca114039940"}
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.463680 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0ba53212-5a6f-45cb-9547-cccd4b36aa32","Type":"ContainerStarted","Data":"92ed8039da0d38e6f99d32d89d6cf65a131922ccb0a744b0a5e702afc7d26096"}
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.486961 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-crblj" event={"ID":"9b2536c4-0b82-4b42-9fe3-20237884d803","Type":"ContainerStarted","Data":"eab27ce5945b5192b4ad81108f9607e53e3ef1f7924c8c2a91b27de1c6d9272d"}
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.501554 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-skhl7" podStartSLOduration=10.154693358 podStartE2EDuration="18.501533673s" podCreationTimestamp="2026-02-16 13:08:23 +0000 UTC" firstStartedPulling="2026-02-16 13:08:32.837040332 +0000 UTC m=+940.213389063" lastFinishedPulling="2026-02-16 13:08:41.183880647 +0000 UTC m=+948.560229378" observedRunningTime="2026-02-16 13:08:41.486650664 +0000 UTC m=+948.862999385" watchObservedRunningTime="2026-02-16 13:08:41.501533673 +0000 UTC m=+948.877882394"
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.504879 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b4j4m" event={"ID":"ad1b2300-a42b-4a99-b186-7661bb410a36","Type":"ContainerStarted","Data":"ec2555abb5b3978f7b8ce2a60bbc64687f783d531491b0850e6af9047c2fedaa"}
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.514091 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerStarted","Data":"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37"}
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.514320 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.432316983 podStartE2EDuration="23.514299004s" podCreationTimestamp="2026-02-16 13:08:18 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.852590081 +0000 UTC m=+937.228938802" lastFinishedPulling="2026-02-16 13:08:40.934572102 +0000 UTC m=+948.310920823" observedRunningTime="2026-02-16 13:08:41.512163247 +0000 UTC m=+948.888511968" watchObservedRunningTime="2026-02-16 13:08:41.514299004 +0000 UTC m=+948.890647725"
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.520169 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"daca8d6b-05ed-4888-9833-9076a4256166","Type":"ContainerStarted","Data":"d4fd34567da2277d7ec91c02b54038023b063a0cd5104da46f6104a0ca4de543"}
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.522599 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-76rjc" event={"ID":"92418e50-20f2-495c-9b06-963a5cd506d1","Type":"ContainerStarted","Data":"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a"}
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.523907 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-76rjc"
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.543753 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" event={"ID":"9909a57b-336c-4687-855f-495a78d21af7","Type":"ContainerStarted","Data":"1d33c0ee4e7ac8c8f775d8a370c2c1dc54afcf20eed20a45abe7ffb6694bb878"}
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.543826 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl"
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.562570 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-b4j4m" podStartSLOduration=8.231317947 podStartE2EDuration="10.562547053s" podCreationTimestamp="2026-02-16 13:08:31 +0000 UTC" firstStartedPulling="2026-02-16 13:08:38.599096933 +0000 UTC m=+945.975445654" lastFinishedPulling="2026-02-16 13:08:40.930326039 +0000 UTC m=+948.306674760" observedRunningTime="2026-02-16 13:08:41.558727382 +0000 UTC m=+948.935076103" watchObservedRunningTime="2026-02-16 13:08:41.562547053 +0000 UTC m=+948.938895774"
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.584220 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" podStartSLOduration=10.584204624 podStartE2EDuration="10.584204624s" podCreationTimestamp="2026-02-16 13:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:41.583319547 +0000 UTC m=+948.959668278" watchObservedRunningTime="2026-02-16 13:08:41.584204624 +0000 UTC m=+948.960553345"
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.623474 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-48mhx" podStartSLOduration=19.504143029 podStartE2EDuration="29.623452229s" podCreationTimestamp="2026-02-16 13:08:12 +0000 UTC" firstStartedPulling="2026-02-16 13:08:30.297388897 +0000 UTC m=+937.673737618" lastFinishedPulling="2026-02-16 13:08:40.416698087 +0000 UTC m=+947.793046818" observedRunningTime="2026-02-16 13:08:41.606571658 +0000 UTC m=+948.982920389" watchObservedRunningTime="2026-02-16 13:08:41.623452229 +0000 UTC m=+948.999800950"
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.635613 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-76rjc" podStartSLOduration=10.635593661 podStartE2EDuration="10.635593661s" podCreationTimestamp="2026-02-16 13:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:41.633529827 +0000 UTC m=+949.009878548" watchObservedRunningTime="2026-02-16 13:08:41.635593661 +0000 UTC m=+949.011942382"
Feb 16 13:08:41 crc kubenswrapper[4740]: I0216 13:08:41.663583 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.203487053 podStartE2EDuration="20.663562811s" podCreationTimestamp="2026-02-16 13:08:21 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.491644354 +0000 UTC m=+936.867993085" lastFinishedPulling="2026-02-16 13:08:40.951720122 +0000 UTC m=+948.328068843" observedRunningTime="2026-02-16 13:08:41.662722885 +0000 UTC m=+949.039071606" watchObservedRunningTime="2026-02-16 13:08:41.663562811 +0000 UTC m=+949.039911532"
Feb 16 13:08:42 crc kubenswrapper[4740]: I0216 13:08:42.397130 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 16 13:08:42 crc kubenswrapper[4740]: I0216 13:08:42.551583 4740 generic.go:334] "Generic (PLEG): container finished" podID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerID="d47df2f9f4e292409c655fe7591c226fe30607c4e0638aae9e313eb8f50fbed1" exitCode=0
Feb 16 13:08:42 crc kubenswrapper[4740]: I0216 13:08:42.551686 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerDied","Data":"d47df2f9f4e292409c655fe7591c226fe30607c4e0638aae9e313eb8f50fbed1"}
Feb 16 13:08:42 crc kubenswrapper[4740]: I0216 13:08:42.557289 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-crblj" event={"ID":"9b2536c4-0b82-4b42-9fe3-20237884d803","Type":"ContainerStarted","Data":"1865a9e30345472085877099f6bae95223a69be3c4d585be9e24425096b09e26"}
Feb 16 13:08:42 crc kubenswrapper[4740]: I0216 13:08:42.600988 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-crblj" podStartSLOduration=17.253447888 podStartE2EDuration="24.600961988s" podCreationTimestamp="2026-02-16 13:08:18 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.995086445 +0000 UTC m=+937.371435176" lastFinishedPulling="2026-02-16 13:08:37.342600555 +0000 UTC m=+944.718949276" observedRunningTime="2026-02-16 13:08:42.596391775 +0000 UTC m=+949.972740506" watchObservedRunningTime="2026-02-16 13:08:42.600961988 +0000 UTC m=+949.977310719"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.159081 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.299489 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-48mhx"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.299540 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-48mhx"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.348731 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-48mhx"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.397650 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.436585 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.566312 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerStarted","Data":"9bfa3be63dee77789c81c4f5ee3f7754049b2872bb9daa83211f215863713928"}
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.566612 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-crblj"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.566723 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-crblj"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.585386 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wqc4t" podStartSLOduration=3.442635229 podStartE2EDuration="5.585367963s" podCreationTimestamp="2026-02-16 13:08:38 +0000 UTC" firstStartedPulling="2026-02-16 13:08:40.791361856 +0000 UTC m=+948.167710567" lastFinishedPulling="2026-02-16 13:08:42.93409458 +0000 UTC m=+950.310443301" observedRunningTime="2026-02-16 13:08:43.583407112 +0000 UTC m=+950.959755833" watchObservedRunningTime="2026-02-16 13:08:43.585367963 +0000 UTC m=+950.961716684"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.634441 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.679686 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.742935 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-skhl7"
Feb 16 13:08:43 crc kubenswrapper[4740]: I0216 13:08:43.742993 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-skhl7"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.573106 4740 generic.go:334] "Generic (PLEG): container finished" podID="0edd2079-790d-4061-aaf4-4213fe6adc7a" containerID="657317f31c1050ec69af0b3e23dd72006ff7b6d9c80a87b62faa23e6ceaa2d28" exitCode=0
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.573158 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0edd2079-790d-4061-aaf4-4213fe6adc7a","Type":"ContainerDied","Data":"657317f31c1050ec69af0b3e23dd72006ff7b6d9c80a87b62faa23e6ceaa2d28"}
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.586596 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b2a3679-b8ef-4221-a9f6-ccd863696aa8" containerID="62dc156e2ff56e656b16dd1a559de7911f6d6db8b4776d5d63e1362b33d3e7f8" exitCode=0
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.587359 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9b2a3679-b8ef-4221-a9f6-ccd863696aa8","Type":"ContainerDied","Data":"62dc156e2ff56e656b16dd1a559de7911f6d6db8b4776d5d63e1362b33d3e7f8"}
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.588055 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.666476 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.721276 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.797432 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-skhl7" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="registry-server" probeResult="failure" output=<
Feb 16 13:08:44 crc kubenswrapper[4740]: 	timeout: failed to connect service ":50051" within 1s
Feb 16 13:08:44 crc kubenswrapper[4740]: >
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.968801 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 13:08:44 crc kubenswrapper[4740]: E0216 13:08:44.969168 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="init"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969185 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="init"
Feb 16 13:08:44 crc kubenswrapper[4740]: E0216 13:08:44.969211 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="dnsmasq-dns"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969217 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="dnsmasq-dns"
Feb 16 13:08:44 crc kubenswrapper[4740]: E0216 13:08:44.969233 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="dnsmasq-dns"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969240 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="dnsmasq-dns"
Feb 16 13:08:44 crc kubenswrapper[4740]: E0216 13:08:44.969257 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="init"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969262 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="init"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969398 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="23232c7f-b058-4eec-850d-b28aecf39a2f" containerName="dnsmasq-dns"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.969418 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d209d0f-d8e6-4e45-aca9-f1e3245be3f8" containerName="dnsmasq-dns"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.971481 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.973439 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.973704 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.974042 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-x7z9r"
Feb 16 13:08:44 crc kubenswrapper[4740]: I0216 13:08:44.974096 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.020754 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057262 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0"
Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057352 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljm8d\" (UniqueName:
\"kubernetes.io/projected/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-kube-api-access-ljm8d\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057389 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-config\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057443 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057504 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-scripts\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057537 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.057644 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159003 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljm8d\" (UniqueName: \"kubernetes.io/projected/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-kube-api-access-ljm8d\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159073 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-config\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159132 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159189 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-scripts\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159222 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159254 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.159292 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.163063 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-config\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.167667 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-scripts\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.168176 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.192055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 
13:08:45.195982 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.201483 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.207700 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljm8d\" (UniqueName: \"kubernetes.io/projected/d4f80435-6b1f-45e1-bc0c-ff150bd3b33b-kube-api-access-ljm8d\") pod \"ovn-northd-0\" (UID: \"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b\") " pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.292662 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.518339 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.606978 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"] Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.607246 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="dnsmasq-dns" containerID="cri-o://1d33c0ee4e7ac8c8f775d8a370c2c1dc54afcf20eed20a45abe7ffb6694bb878" gracePeriod=10 Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.621800 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9b2a3679-b8ef-4221-a9f6-ccd863696aa8","Type":"ContainerStarted","Data":"4ab60d1f2e380b2a88dbc22d2539c75daba3b4561b478d6bd7f1bc4cc49af524"} Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.675870 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"] Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.677502 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.680116 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.534704061 podStartE2EDuration="35.680097168s" podCreationTimestamp="2026-02-16 13:08:10 +0000 UTC" firstStartedPulling="2026-02-16 13:08:29.508660429 +0000 UTC m=+936.885009140" lastFinishedPulling="2026-02-16 13:08:37.654053526 +0000 UTC m=+945.030402247" observedRunningTime="2026-02-16 13:08:45.675122101 +0000 UTC m=+953.051470832" watchObservedRunningTime="2026-02-16 13:08:45.680097168 +0000 UTC m=+953.056445879" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.788701 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"] Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.884241 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.884323 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsv45\" (UniqueName: \"kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.884401 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: 
\"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.884479 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.884505 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.986109 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.986198 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.986246 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.986289 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsv45\" (UniqueName: \"kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.986387 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.987250 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.987363 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.987449 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:45 crc kubenswrapper[4740]: I0216 13:08:45.987963 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.006071 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsv45\" (UniqueName: \"kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45\") pod \"dnsmasq-dns-b8fbc5445-g5hv2\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.059749 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 13:08:46 crc kubenswrapper[4740]: W0216 13:08:46.065447 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4f80435_6b1f_45e1_bc0c_ff150bd3b33b.slice/crio-a3d821a1b6b3cc267c4ee7b7d307c31c520e811e8e2228990c972b36d6dcd1bb WatchSource:0}: Error finding container a3d821a1b6b3cc267c4ee7b7d307c31c520e811e8e2228990c972b36d6dcd1bb: Status 404 returned error can't find the container with id a3d821a1b6b3cc267c4ee7b7d307c31c520e811e8e2228990c972b36d6dcd1bb Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.085846 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.522650 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"] Feb 16 13:08:46 crc kubenswrapper[4740]: W0216 13:08:46.530871 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb414a4c4_7799_4c49_9aa9_5718c2e5855f.slice/crio-80f534c4bd81393c75dfb37a96ed92ed9eb1b28b8bfbf33de3f706f2e7523c70 WatchSource:0}: Error finding container 80f534c4bd81393c75dfb37a96ed92ed9eb1b28b8bfbf33de3f706f2e7523c70: Status 404 returned error can't find the container with id 80f534c4bd81393c75dfb37a96ed92ed9eb1b28b8bfbf33de3f706f2e7523c70 Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.622852 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" event={"ID":"b414a4c4-7799-4c49-9aa9-5718c2e5855f","Type":"ContainerStarted","Data":"80f534c4bd81393c75dfb37a96ed92ed9eb1b28b8bfbf33de3f706f2e7523c70"} Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.624621 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b","Type":"ContainerStarted","Data":"a3d821a1b6b3cc267c4ee7b7d307c31c520e811e8e2228990c972b36d6dcd1bb"} Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.626938 4740 generic.go:334] "Generic (PLEG): container finished" podID="9909a57b-336c-4687-855f-495a78d21af7" containerID="1d33c0ee4e7ac8c8f775d8a370c2c1dc54afcf20eed20a45abe7ffb6694bb878" exitCode=0 Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.627016 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" event={"ID":"9909a57b-336c-4687-855f-495a78d21af7","Type":"ContainerDied","Data":"1d33c0ee4e7ac8c8f775d8a370c2c1dc54afcf20eed20a45abe7ffb6694bb878"} Feb 16 13:08:46 crc kubenswrapper[4740]: 
I0216 13:08:46.693050 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.699106 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.701442 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-zgr9s" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.701727 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.701999 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.702177 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.719999 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.798644 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbh24\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-kube-api-access-sbh24\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.798700 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.798775 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8953d6de-24a5-4645-b270-2bbafe5b17c5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.799047 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-lock\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.799082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.799198 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-cache\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.900501 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-cache\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.900561 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbh24\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-kube-api-access-sbh24\") pod 
\"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: E0216 13:08:46.900827 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:08:46 crc kubenswrapper[4740]: E0216 13:08:46.900872 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:08:46 crc kubenswrapper[4740]: E0216 13:08:46.900961 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift podName:8953d6de-24a5-4645-b270-2bbafe5b17c5 nodeName:}" failed. No retries permitted until 2026-02-16 13:08:47.400939423 +0000 UTC m=+954.777288144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift") pod "swift-storage-0" (UID: "8953d6de-24a5-4645-b270-2bbafe5b17c5") : configmap "swift-ring-files" not found Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.900976 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.901064 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8953d6de-24a5-4645-b270-2bbafe5b17c5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.901400 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-cache\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.901905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-lock\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.901938 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.902203 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8953d6de-24a5-4645-b270-2bbafe5b17c5-lock\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.902247 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.907055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8953d6de-24a5-4645-b270-2bbafe5b17c5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.933332 
4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbh24\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-kube-api-access-sbh24\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:46 crc kubenswrapper[4740]: I0216 13:08:46.934259 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.024653 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: connect: connection refused" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.261083 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.383436 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4rgvg"] Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.384627 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.387576 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.387964 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.388159 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.393902 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4rgvg"] Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.415283 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:47 crc kubenswrapper[4740]: E0216 13:08:47.416693 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:08:47 crc kubenswrapper[4740]: E0216 13:08:47.416751 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:08:47 crc kubenswrapper[4740]: E0216 13:08:47.416852 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift podName:8953d6de-24a5-4645-b270-2bbafe5b17c5 nodeName:}" failed. No retries permitted until 2026-02-16 13:08:48.416786935 +0000 UTC m=+955.793135666 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift") pod "swift-storage-0" (UID: "8953d6de-24a5-4645-b270-2bbafe5b17c5") : configmap "swift-ring-files" not found Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517058 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517126 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517181 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517200 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517227 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517255 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2kn\" (UniqueName: \"kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.517299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619199 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619261 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619309 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619325 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619349 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619376 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp2kn\" (UniqueName: \"kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.619396 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.620206 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.620829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.621130 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.625201 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.625504 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.628189 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf\") pod \"swift-ring-rebalance-4rgvg\" (UID: 
\"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.650629 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp2kn\" (UniqueName: \"kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn\") pod \"swift-ring-rebalance-4rgvg\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.677878 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.679940 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.719774 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.720918 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.720964 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.721097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l4zjm\" (UniqueName: \"kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.730944 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.826771 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4zjm\" (UniqueName: \"kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.827153 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.827203 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.827629 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 
16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.827719 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:47 crc kubenswrapper[4740]: I0216 13:08:47.862801 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4zjm\" (UniqueName: \"kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm\") pod \"redhat-marketplace-mnb45\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.105369 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.211275 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4rgvg"] Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.377770 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.378910 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.439617 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:48 crc kubenswrapper[4740]: E0216 13:08:48.439961 4740 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:08:48 crc kubenswrapper[4740]: E0216 13:08:48.439981 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:08:48 crc kubenswrapper[4740]: E0216 13:08:48.440040 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift podName:8953d6de-24a5-4645-b270-2bbafe5b17c5 nodeName:}" failed. No retries permitted until 2026-02-16 13:08:50.440021773 +0000 UTC m=+957.816370494 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift") pod "swift-storage-0" (UID: "8953d6de-24a5-4645-b270-2bbafe5b17c5") : configmap "swift-ring-files" not found Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.468430 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.575801 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.642657 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config\") pod \"9909a57b-336c-4687-855f-495a78d21af7\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.642730 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb\") pod \"9909a57b-336c-4687-855f-495a78d21af7\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.642845 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdxj9\" (UniqueName: \"kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9\") pod \"9909a57b-336c-4687-855f-495a78d21af7\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.642894 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc\") pod \"9909a57b-336c-4687-855f-495a78d21af7\" (UID: \"9909a57b-336c-4687-855f-495a78d21af7\") " Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.659129 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9" (OuterVolumeSpecName: "kube-api-access-sdxj9") pod "9909a57b-336c-4687-855f-495a78d21af7" (UID: "9909a57b-336c-4687-855f-495a78d21af7"). InnerVolumeSpecName "kube-api-access-sdxj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.695211 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0edd2079-790d-4061-aaf4-4213fe6adc7a","Type":"ContainerStarted","Data":"21f7cc3c3fdf74d8fbf24eb65a92a49c96366aae04d5d936ec32816f74b0afe3"} Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.696041 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config" (OuterVolumeSpecName: "config") pod "9909a57b-336c-4687-855f-495a78d21af7" (UID: "9909a57b-336c-4687-855f-495a78d21af7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.697987 4740 generic.go:334] "Generic (PLEG): container finished" podID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerID="6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692" exitCode=0 Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.698075 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" event={"ID":"b414a4c4-7799-4c49-9aa9-5718c2e5855f","Type":"ContainerDied","Data":"6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692"} Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.707239 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.707355 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-c87cl" event={"ID":"9909a57b-336c-4687-855f-495a78d21af7","Type":"ContainerDied","Data":"ba54eead2dab21215317f93298356fea3b351391f33454128a61a0ee8e86082c"} Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.707416 4740 scope.go:117] "RemoveContainer" containerID="1d33c0ee4e7ac8c8f775d8a370c2c1dc54afcf20eed20a45abe7ffb6694bb878" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.711562 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rgvg" event={"ID":"8a769496-58ca-4540-9dc4-bd8df7e682fc","Type":"ContainerStarted","Data":"d46868a8f2da38aed3e8f09b139b4f8fd740b0b6241a64157a497339a73a45a9"} Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.713026 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.727781 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9909a57b-336c-4687-855f-495a78d21af7" (UID: "9909a57b-336c-4687-855f-495a78d21af7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.728441 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.896893884 podStartE2EDuration="37.728423518s" podCreationTimestamp="2026-02-16 13:08:11 +0000 UTC" firstStartedPulling="2026-02-16 13:08:27.458200478 +0000 UTC m=+934.834549199" lastFinishedPulling="2026-02-16 13:08:37.289730102 +0000 UTC m=+944.666078833" observedRunningTime="2026-02-16 13:08:48.720895011 +0000 UTC m=+956.097243742" watchObservedRunningTime="2026-02-16 13:08:48.728423518 +0000 UTC m=+956.104772229" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.728551 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9909a57b-336c-4687-855f-495a78d21af7" (UID: "9909a57b-336c-4687-855f-495a78d21af7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.750014 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdxj9\" (UniqueName: \"kubernetes.io/projected/9909a57b-336c-4687-855f-495a78d21af7-kube-api-access-sdxj9\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.753322 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.754722 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.755326 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9909a57b-336c-4687-855f-495a78d21af7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:48 crc kubenswrapper[4740]: I0216 13:08:48.774045 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.048868 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"] Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.056026 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-c87cl"] Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.059118 4740 scope.go:117] "RemoveContainer" containerID="da3a287d7891a787094b6dfd14fda8d39fc078ebe1345f09ce6fe824a1be10a9" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.300491 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9909a57b-336c-4687-855f-495a78d21af7" 
path="/var/lib/kubelet/pods/9909a57b-336c-4687-855f-495a78d21af7/volumes" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.720251 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" event={"ID":"b414a4c4-7799-4c49-9aa9-5718c2e5855f","Type":"ContainerStarted","Data":"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64"} Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.721938 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.723825 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b","Type":"ContainerStarted","Data":"58790eae7c5c223749a5954ba961d9322038e0165accfb474f5b3b920e9e468e"} Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.723946 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d4f80435-6b1f-45e1-bc0c-ff150bd3b33b","Type":"ContainerStarted","Data":"ae0c5c22153eb033a01b4580865b4192b26c0d7dea01338d02f3551ba3861103"} Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.725068 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.730390 4740 generic.go:334] "Generic (PLEG): container finished" podID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerID="31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081" exitCode=0 Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.731440 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerDied","Data":"31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081"} Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.731506 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerStarted","Data":"674fc74e32c41001843c95fbf70084ec6f8d9862901843c0b561d5e3afe04969"} Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.748275 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" podStartSLOduration=4.748258199 podStartE2EDuration="4.748258199s" podCreationTimestamp="2026-02-16 13:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:49.743788298 +0000 UTC m=+957.120137029" watchObservedRunningTime="2026-02-16 13:08:49.748258199 +0000 UTC m=+957.124606920" Feb 16 13:08:49 crc kubenswrapper[4740]: I0216 13:08:49.792663 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.7654319689999998 podStartE2EDuration="5.792644825s" podCreationTimestamp="2026-02-16 13:08:44 +0000 UTC" firstStartedPulling="2026-02-16 13:08:46.068274252 +0000 UTC m=+953.444622973" lastFinishedPulling="2026-02-16 13:08:49.095487108 +0000 UTC m=+956.471835829" observedRunningTime="2026-02-16 13:08:49.785951785 +0000 UTC m=+957.162300506" watchObservedRunningTime="2026-02-16 13:08:49.792644825 +0000 UTC m=+957.168993546" Feb 16 13:08:50 crc kubenswrapper[4740]: I0216 13:08:50.494610 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:50 crc kubenswrapper[4740]: E0216 13:08:50.494821 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:08:50 crc kubenswrapper[4740]: E0216 13:08:50.495037 
4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:08:50 crc kubenswrapper[4740]: E0216 13:08:50.495101 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift podName:8953d6de-24a5-4645-b270-2bbafe5b17c5 nodeName:}" failed. No retries permitted until 2026-02-16 13:08:54.495082948 +0000 UTC m=+961.871431659 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift") pod "swift-storage-0" (UID: "8953d6de-24a5-4645-b270-2bbafe5b17c5") : configmap "swift-ring-files" not found Feb 16 13:08:50 crc kubenswrapper[4740]: E0216 13:08:50.652394 4740 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.147:34142->38.102.83.147:36137: read tcp 38.102.83.147:34142->38.102.83.147:36137: read: connection reset by peer Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.416623 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqc4t"] Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.726240 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.726294 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.751798 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wqc4t" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="registry-server" containerID="cri-o://9bfa3be63dee77789c81c4f5ee3f7754049b2872bb9daa83211f215863713928" gracePeriod=2 Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.825289 4740 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 13:08:51 crc kubenswrapper[4740]: I0216 13:08:51.923367 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 13:08:52 crc kubenswrapper[4740]: I0216 13:08:52.759737 4740 generic.go:334] "Generic (PLEG): container finished" podID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerID="9bfa3be63dee77789c81c4f5ee3f7754049b2872bb9daa83211f215863713928" exitCode=0 Feb 16 13:08:52 crc kubenswrapper[4740]: I0216 13:08:52.759846 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerDied","Data":"9bfa3be63dee77789c81c4f5ee3f7754049b2872bb9daa83211f215863713928"} Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.218338 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.218690 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.324489 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.354243 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.424445 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-nxmdt"] Feb 16 13:08:53 crc kubenswrapper[4740]: E0216 13:08:53.425869 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="init" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.425945 4740 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="init" Feb 16 13:08:53 crc kubenswrapper[4740]: E0216 13:08:53.426016 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="dnsmasq-dns" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.426069 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="dnsmasq-dns" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.426330 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9909a57b-336c-4687-855f-495a78d21af7" containerName="dnsmasq-dns" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.426974 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.434736 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nxmdt"] Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.440911 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8d7k\" (UniqueName: \"kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.441078 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.477638 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-8cb8-account-create-update-dgv8s"] Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.478781 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.480683 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.491147 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8cb8-account-create-update-dgv8s"] Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.542100 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.542258 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8d7k\" (UniqueName: \"kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.542366 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.542477 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5q7r\" (UniqueName: 
\"kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.543684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.570825 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8d7k\" (UniqueName: \"kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k\") pod \"glance-db-create-nxmdt\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") " pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.644509 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.644915 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5q7r\" (UniqueName: \"kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.645496 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.673391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5q7r\" (UniqueName: \"kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r\") pod \"glance-8cb8-account-create-update-dgv8s\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") " pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.758773 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxmdt" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.789502 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.800858 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8cb8-account-create-update-dgv8s" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.835289 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:53 crc kubenswrapper[4740]: I0216 13:08:53.844295 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.049306 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9mvdt"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.050695 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.056438 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9mvdt"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.116730 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7989-account-create-update-s6gss"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.117731 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.122765 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.131251 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7989-account-create-update-s6gss"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.157996 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2657\" (UniqueName: \"kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.158071 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7vt\" (UniqueName: \"kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.158252 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.158306 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.259976 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.260050 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.260179 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2657\" (UniqueName: \"kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.260222 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ss7vt\" (UniqueName: \"kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.260956 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.261199 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.280548 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2657\" (UniqueName: \"kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657\") pod \"keystone-7989-account-create-update-s6gss\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") " pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.281677 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss7vt\" (UniqueName: \"kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt\") pod \"keystone-db-create-9mvdt\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") " pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.318003 4740 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/placement-db-create-9v664"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.319067 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.332599 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9v664"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.361136 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9c66\" (UniqueName: \"kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.361185 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.374131 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9mvdt" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.433629 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7989-account-create-update-s6gss" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.439829 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4b9b-account-create-update-njhb7"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.441171 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.443554 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.463619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.464248 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s2vf\" (UniqueName: \"kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.464315 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.464377 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9c66\" (UniqueName: \"kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.465438 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.486405 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4b9b-account-create-update-njhb7"] Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.505903 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9c66\" (UniqueName: \"kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66\") pod \"placement-db-create-9v664\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") " pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.565642 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s2vf\" (UniqueName: \"kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.565694 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.565758 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod 
\"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0" Feb 16 13:08:54 crc kubenswrapper[4740]: E0216 13:08:54.566009 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:08:54 crc kubenswrapper[4740]: E0216 13:08:54.566032 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:08:54 crc kubenswrapper[4740]: E0216 13:08:54.566091 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift podName:8953d6de-24a5-4645-b270-2bbafe5b17c5 nodeName:}" failed. No retries permitted until 2026-02-16 13:09:02.566075619 +0000 UTC m=+969.942424330 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift") pod "swift-storage-0" (UID: "8953d6de-24a5-4645-b270-2bbafe5b17c5") : configmap "swift-ring-files" not found Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.566643 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.587527 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s2vf\" (UniqueName: \"kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf\") pod \"placement-4b9b-account-create-update-njhb7\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") " pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 
13:08:54.666575 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9v664" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.771896 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b9b-account-create-update-njhb7" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.873779 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.973002 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities\") pod \"dde0147a-01d8-430b-a230-9d8bdfffeadd\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.973052 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfwsz\" (UniqueName: \"kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz\") pod \"dde0147a-01d8-430b-a230-9d8bdfffeadd\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.973080 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content\") pod \"dde0147a-01d8-430b-a230-9d8bdfffeadd\" (UID: \"dde0147a-01d8-430b-a230-9d8bdfffeadd\") " Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.977672 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities" (OuterVolumeSpecName: "utilities") pod "dde0147a-01d8-430b-a230-9d8bdfffeadd" (UID: "dde0147a-01d8-430b-a230-9d8bdfffeadd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:54 crc kubenswrapper[4740]: I0216 13:08:54.988017 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz" (OuterVolumeSpecName: "kube-api-access-mfwsz") pod "dde0147a-01d8-430b-a230-9d8bdfffeadd" (UID: "dde0147a-01d8-430b-a230-9d8bdfffeadd"). InnerVolumeSpecName "kube-api-access-mfwsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.058590 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dde0147a-01d8-430b-a230-9d8bdfffeadd" (UID: "dde0147a-01d8-430b-a230-9d8bdfffeadd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.075980 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.076023 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfwsz\" (UniqueName: \"kubernetes.io/projected/dde0147a-01d8-430b-a230-9d8bdfffeadd-kube-api-access-mfwsz\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.076042 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde0147a-01d8-430b-a230-9d8bdfffeadd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.209865 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7989-account-create-update-s6gss"] Feb 16 13:08:55 crc kubenswrapper[4740]: W0216 
13:08:55.334165 4740 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde0147a_01d8_430b_a230_9d8bdfffeadd.slice/pids.max": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde0147a_01d8_430b_a230_9d8bdfffeadd.slice/pids.max: no such device Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.382135 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8cb8-account-create-update-dgv8s"] Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.400900 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9mvdt"] Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.413295 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nxmdt"] Feb 16 13:08:55 crc kubenswrapper[4740]: E0216 13:08:55.557333 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde0147a_01d8_430b_a230_9d8bdfffeadd.slice/crio-6d8ddc766e620293e4ce19966be3eade1f701ea7f4bc85aaa8dbabf74209de85\": RecentStats: unable to find data in memory cache]" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.565206 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9v664"] Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.591973 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4b9b-account-create-update-njhb7"] Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.800728 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7989-account-create-update-s6gss" event={"ID":"14c97501-5a5c-4e03-8e50-cf7422806c32","Type":"ContainerStarted","Data":"4ae899300c9a6a6072cde926ba47e7d60a16bc7eccd0ef24bf505410cc36580f"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.800780 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-7989-account-create-update-s6gss" event={"ID":"14c97501-5a5c-4e03-8e50-cf7422806c32","Type":"ContainerStarted","Data":"e0d34f1ac2e6b6d66de3086dfecd98f8e49cffbc92e012b3ac7b1ea262486ec1"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.804252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxmdt" event={"ID":"4996abf8-6c4b-42d0-99f2-aeacf2fd5591","Type":"ContainerStarted","Data":"a9f2fb916ca14b8c5ef554516013184723e7f73663c25f8d4596934aa4c48ae1"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.804293 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxmdt" event={"ID":"4996abf8-6c4b-42d0-99f2-aeacf2fd5591","Type":"ContainerStarted","Data":"94a215b864e6eed7e3f2d37e6b50edeb4ad167f73f22c4d0e5cda0e3504635d6"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.810721 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rgvg" event={"ID":"8a769496-58ca-4540-9dc4-bd8df7e682fc","Type":"ContainerStarted","Data":"b4fa66e4440a06d8e1925285b9bd7f879f94b33aa14f971048dd803186ba2997"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.822860 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7989-account-create-update-s6gss" podStartSLOduration=1.822844905 podStartE2EDuration="1.822844905s" podCreationTimestamp="2026-02-16 13:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:55.815306018 +0000 UTC m=+963.191654739" watchObservedRunningTime="2026-02-16 13:08:55.822844905 +0000 UTC m=+963.199193626" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.825783 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqc4t" 
event={"ID":"dde0147a-01d8-430b-a230-9d8bdfffeadd","Type":"ContainerDied","Data":"6d8ddc766e620293e4ce19966be3eade1f701ea7f4bc85aaa8dbabf74209de85"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.825877 4740 scope.go:117] "RemoveContainer" containerID="9bfa3be63dee77789c81c4f5ee3f7754049b2872bb9daa83211f215863713928" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.825999 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqc4t" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.828109 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b9b-account-create-update-njhb7" event={"ID":"bb88b05d-b7b7-4a08-847c-5e8d5cc98477","Type":"ContainerStarted","Data":"ad0e406c973e4bb00f25e74fd93906ecc970c954b609dbdb218023b0dafa24d9"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.834005 4740 generic.go:334] "Generic (PLEG): container finished" podID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerID="5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c" exitCode=0 Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.834060 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-nxmdt" podStartSLOduration=2.834049517 podStartE2EDuration="2.834049517s" podCreationTimestamp="2026-02-16 13:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:55.831474096 +0000 UTC m=+963.207822837" watchObservedRunningTime="2026-02-16 13:08:55.834049517 +0000 UTC m=+963.210398238" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.834315 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerDied","Data":"5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c"} Feb 
16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.846708 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9v664" event={"ID":"59544dcd-0bd1-4b5f-abf6-9ab972168af0","Type":"ContainerStarted","Data":"56cc37204670ebf20ed0172df79fdfa9d93ecfda998e37b4c23425462cf86f06"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.849040 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8cb8-account-create-update-dgv8s" event={"ID":"b12e494a-5467-4264-a0e5-2596c61b4a73","Type":"ContainerStarted","Data":"90265e42eb4894d283c546642bcf3972b8e18c6c5cd6a445431640804cc73965"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.849071 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8cb8-account-create-update-dgv8s" event={"ID":"b12e494a-5467-4264-a0e5-2596c61b4a73","Type":"ContainerStarted","Data":"f8e58d548b1b3ef23567819be0b9506bd3ead4ae7b114fd20dfbcf6bcf468a99"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.852649 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4rgvg" podStartSLOduration=2.467683 podStartE2EDuration="8.852634943s" podCreationTimestamp="2026-02-16 13:08:47 +0000 UTC" firstStartedPulling="2026-02-16 13:08:48.275029741 +0000 UTC m=+955.651378462" lastFinishedPulling="2026-02-16 13:08:54.659981684 +0000 UTC m=+962.036330405" observedRunningTime="2026-02-16 13:08:55.846942314 +0000 UTC m=+963.223291035" watchObservedRunningTime="2026-02-16 13:08:55.852634943 +0000 UTC m=+963.228983654" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.854095 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9mvdt" event={"ID":"5b945754-b567-43e9-a84a-4e0ea95900e7","Type":"ContainerStarted","Data":"494606bd64bc796891b73eef0c184bf2997ed3f769f9febfe8dd04a4677e5715"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.854171 4740 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-db-create-9mvdt" event={"ID":"5b945754-b567-43e9-a84a-4e0ea95900e7","Type":"ContainerStarted","Data":"47b3c8f37dcb32f3d527254dece6c0a4adcaf1fd786bf2bb59ffac5e66cd17db"} Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.855488 4740 scope.go:117] "RemoveContainer" containerID="d47df2f9f4e292409c655fe7591c226fe30607c4e0638aae9e313eb8f50fbed1" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.881707 4740 scope.go:117] "RemoveContainer" containerID="6c1987d37eda48bc957fc38cb1ada6838a4e15b9f5f27415b2d688878ac2b28b" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.891056 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8cb8-account-create-update-dgv8s" podStartSLOduration=2.8910382610000003 podStartE2EDuration="2.891038261s" podCreationTimestamp="2026-02-16 13:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:55.89068213 +0000 UTC m=+963.267030851" watchObservedRunningTime="2026-02-16 13:08:55.891038261 +0000 UTC m=+963.267386982" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.912200 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-9mvdt" podStartSLOduration=1.912179596 podStartE2EDuration="1.912179596s" podCreationTimestamp="2026-02-16 13:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:08:55.907454327 +0000 UTC m=+963.283803048" watchObservedRunningTime="2026-02-16 13:08:55.912179596 +0000 UTC m=+963.288528317" Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.932280 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqc4t"] Feb 16 13:08:55 crc kubenswrapper[4740]: I0216 13:08:55.935742 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-wqc4t"] Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.021145 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48mhx"] Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.021641 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-48mhx" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="registry-server" containerID="cri-o://7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37" gracePeriod=2 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.088096 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.147032 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"] Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.147589 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-76rjc" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="dnsmasq-dns" containerID="cri-o://8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a" gracePeriod=10 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.221078 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"] Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.221303 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-skhl7" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="registry-server" containerID="cri-o://23c26fd38fe2c111190f23402f390777887a9905dc758d9f6ac51ca114039940" gracePeriod=2 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.618954 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.732901 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxgc8\" (UniqueName: \"kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8\") pod \"1e068ce5-e7a1-430c-97f7-fed550912288\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.732947 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities\") pod \"1e068ce5-e7a1-430c-97f7-fed550912288\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.733024 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content\") pod \"1e068ce5-e7a1-430c-97f7-fed550912288\" (UID: \"1e068ce5-e7a1-430c-97f7-fed550912288\") " Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.734469 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities" (OuterVolumeSpecName: "utilities") pod "1e068ce5-e7a1-430c-97f7-fed550912288" (UID: "1e068ce5-e7a1-430c-97f7-fed550912288"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.742012 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8" (OuterVolumeSpecName: "kube-api-access-gxgc8") pod "1e068ce5-e7a1-430c-97f7-fed550912288" (UID: "1e068ce5-e7a1-430c-97f7-fed550912288"). InnerVolumeSpecName "kube-api-access-gxgc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.796188 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e068ce5-e7a1-430c-97f7-fed550912288" (UID: "1e068ce5-e7a1-430c-97f7-fed550912288"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.835208 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.835240 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxgc8\" (UniqueName: \"kubernetes.io/projected/1e068ce5-e7a1-430c-97f7-fed550912288-kube-api-access-gxgc8\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.835254 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e068ce5-e7a1-430c-97f7-fed550912288-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.866136 4740 generic.go:334] "Generic (PLEG): container finished" podID="5b945754-b567-43e9-a84a-4e0ea95900e7" containerID="494606bd64bc796891b73eef0c184bf2997ed3f769f9febfe8dd04a4677e5715" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.866429 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9mvdt" event={"ID":"5b945754-b567-43e9-a84a-4e0ea95900e7","Type":"ContainerDied","Data":"494606bd64bc796891b73eef0c184bf2997ed3f769f9febfe8dd04a4677e5715"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.870088 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="14c97501-5a5c-4e03-8e50-cf7422806c32" containerID="4ae899300c9a6a6072cde926ba47e7d60a16bc7eccd0ef24bf505410cc36580f" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.870147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7989-account-create-update-s6gss" event={"ID":"14c97501-5a5c-4e03-8e50-cf7422806c32","Type":"ContainerDied","Data":"4ae899300c9a6a6072cde926ba47e7d60a16bc7eccd0ef24bf505410cc36580f"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.872282 4740 generic.go:334] "Generic (PLEG): container finished" podID="aca31aa1-429e-4f65-acd5-8896734d0713" containerID="23c26fd38fe2c111190f23402f390777887a9905dc758d9f6ac51ca114039940" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.872344 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerDied","Data":"23c26fd38fe2c111190f23402f390777887a9905dc758d9f6ac51ca114039940"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.874430 4740 generic.go:334] "Generic (PLEG): container finished" podID="bb88b05d-b7b7-4a08-847c-5e8d5cc98477" containerID="373fd871381d49fd63e5ca3ab666f3487ac9b7f0d28abe89d7c9eb2229c50cd0" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.874497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b9b-account-create-update-njhb7" event={"ID":"bb88b05d-b7b7-4a08-847c-5e8d5cc98477","Type":"ContainerDied","Data":"373fd871381d49fd63e5ca3ab666f3487ac9b7f0d28abe89d7c9eb2229c50cd0"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.877238 4740 generic.go:334] "Generic (PLEG): container finished" podID="b12e494a-5467-4264-a0e5-2596c61b4a73" containerID="90265e42eb4894d283c546642bcf3972b8e18c6c5cd6a445431640804cc73965" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.877299 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-8cb8-account-create-update-dgv8s" event={"ID":"b12e494a-5467-4264-a0e5-2596c61b4a73","Type":"ContainerDied","Data":"90265e42eb4894d283c546642bcf3972b8e18c6c5cd6a445431640804cc73965"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.881601 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.896863 4740 generic.go:334] "Generic (PLEG): container finished" podID="1e068ce5-e7a1-430c-97f7-fed550912288" containerID="7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.896914 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48mhx" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.897010 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerDied","Data":"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.898002 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48mhx" event={"ID":"1e068ce5-e7a1-430c-97f7-fed550912288","Type":"ContainerDied","Data":"e25ea221b5fa4528f6319f69abf2088a3814b82a1e688ade98fa8da437436a8d"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.898046 4740 scope.go:117] "RemoveContainer" containerID="7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.900554 4740 generic.go:334] "Generic (PLEG): container finished" podID="4996abf8-6c4b-42d0-99f2-aeacf2fd5591" containerID="a9f2fb916ca14b8c5ef554516013184723e7f73663c25f8d4596934aa4c48ae1" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.900620 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxmdt" event={"ID":"4996abf8-6c4b-42d0-99f2-aeacf2fd5591","Type":"ContainerDied","Data":"a9f2fb916ca14b8c5ef554516013184723e7f73663c25f8d4596934aa4c48ae1"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.938398 4740 generic.go:334] "Generic (PLEG): container finished" podID="92418e50-20f2-495c-9b06-963a5cd506d1" containerID="8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.938498 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-76rjc" event={"ID":"92418e50-20f2-495c-9b06-963a5cd506d1","Type":"ContainerDied","Data":"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.938535 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-76rjc" event={"ID":"92418e50-20f2-495c-9b06-963a5cd506d1","Type":"ContainerDied","Data":"9fcab72dfdae204ac7d5555a0d11d6020c89a6deabe72c826aa4804ab6026578"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.938665 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-76rjc" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.948573 4740 generic.go:334] "Generic (PLEG): container finished" podID="59544dcd-0bd1-4b5f-abf6-9ab972168af0" containerID="baf6eedd884c010f372a94d42ad034029305e68987497ddad6d85e711b8ce518" exitCode=0 Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.954057 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9v664" event={"ID":"59544dcd-0bd1-4b5f-abf6-9ab972168af0","Type":"ContainerDied","Data":"baf6eedd884c010f372a94d42ad034029305e68987497ddad6d85e711b8ce518"} Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.965626 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skhl7" Feb 16 13:08:56 crc kubenswrapper[4740]: I0216 13:08:56.971077 4740 scope.go:117] "RemoveContainer" containerID="ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.025988 4740 scope.go:117] "RemoveContainer" containerID="72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.053774 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities\") pod \"aca31aa1-429e-4f65-acd5-8896734d0713\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054041 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb\") pod \"92418e50-20f2-495c-9b06-963a5cd506d1\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054162 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config\") pod \"92418e50-20f2-495c-9b06-963a5cd506d1\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054267 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww4jl\" (UniqueName: \"kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl\") pod \"aca31aa1-429e-4f65-acd5-8896734d0713\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054385 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gndhd\" (UniqueName: \"kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd\") pod \"92418e50-20f2-495c-9b06-963a5cd506d1\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054480 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content\") pod \"aca31aa1-429e-4f65-acd5-8896734d0713\" (UID: \"aca31aa1-429e-4f65-acd5-8896734d0713\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054616 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb\") pod \"92418e50-20f2-495c-9b06-963a5cd506d1\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.054757 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc\") pod \"92418e50-20f2-495c-9b06-963a5cd506d1\" (UID: \"92418e50-20f2-495c-9b06-963a5cd506d1\") " Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.070234 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd" (OuterVolumeSpecName: "kube-api-access-gndhd") pod "92418e50-20f2-495c-9b06-963a5cd506d1" (UID: "92418e50-20f2-495c-9b06-963a5cd506d1"). InnerVolumeSpecName "kube-api-access-gndhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.071379 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities" (OuterVolumeSpecName: "utilities") pod "aca31aa1-429e-4f65-acd5-8896734d0713" (UID: "aca31aa1-429e-4f65-acd5-8896734d0713"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.076034 4740 scope.go:117] "RemoveContainer" containerID="7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37" Feb 16 13:08:57 crc kubenswrapper[4740]: E0216 13:08:57.081204 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37\": container with ID starting with 7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37 not found: ID does not exist" containerID="7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.081486 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37"} err="failed to get container status \"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37\": rpc error: code = NotFound desc = could not find container \"7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37\": container with ID starting with 7bc89f186ead438638813340ceb29d8e87b8ddfb1907b8b5802b49e6316f6d37 not found: ID does not exist" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.081631 4740 scope.go:117] "RemoveContainer" containerID="ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7" Feb 16 13:08:57 crc kubenswrapper[4740]: E0216 13:08:57.082563 4740 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7\": container with ID starting with ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7 not found: ID does not exist" containerID="ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.082635 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7"} err="failed to get container status \"ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7\": rpc error: code = NotFound desc = could not find container \"ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7\": container with ID starting with ba3dd21d1e909ca9b44c4b72e597d6ffee98c18cca0b40949c3b4d9a23a7d3b7 not found: ID does not exist" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.082661 4740 scope.go:117] "RemoveContainer" containerID="72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9" Feb 16 13:08:57 crc kubenswrapper[4740]: E0216 13:08:57.083331 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9\": container with ID starting with 72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9 not found: ID does not exist" containerID="72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.083373 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9"} err="failed to get container status \"72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9\": rpc error: code = NotFound desc = could not find container 
\"72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9\": container with ID starting with 72e7bf86759fa4aeb46e8e55837d501a16cb3f8c08a42c2eb150906ab5f845e9 not found: ID does not exist" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.083390 4740 scope.go:117] "RemoveContainer" containerID="8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.085098 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl" (OuterVolumeSpecName: "kube-api-access-ww4jl") pod "aca31aa1-429e-4f65-acd5-8896734d0713" (UID: "aca31aa1-429e-4f65-acd5-8896734d0713"). InnerVolumeSpecName "kube-api-access-ww4jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.105970 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "92418e50-20f2-495c-9b06-963a5cd506d1" (UID: "92418e50-20f2-495c-9b06-963a5cd506d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.113152 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92418e50-20f2-495c-9b06-963a5cd506d1" (UID: "92418e50-20f2-495c-9b06-963a5cd506d1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.125303 4740 scope.go:117] "RemoveContainer" containerID="6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.130910 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config" (OuterVolumeSpecName: "config") pod "92418e50-20f2-495c-9b06-963a5cd506d1" (UID: "92418e50-20f2-495c-9b06-963a5cd506d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.138994 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48mhx"] Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.142545 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92418e50-20f2-495c-9b06-963a5cd506d1" (UID: "92418e50-20f2-495c-9b06-963a5cd506d1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.150866 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-48mhx"] Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.156136 4740 scope.go:117] "RemoveContainer" containerID="8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a" Feb 16 13:08:57 crc kubenswrapper[4740]: E0216 13:08:57.156606 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a\": container with ID starting with 8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a not found: ID does not exist" containerID="8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.156658 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a"} err="failed to get container status \"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a\": rpc error: code = NotFound desc = could not find container \"8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a\": container with ID starting with 8e6c38689ba0b5da0a1fe1c2d733f5693f82e367b9ad455d0c88c92bd003aa3a not found: ID does not exist" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.156691 4740 scope.go:117] "RemoveContainer" containerID="6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157047 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157077 4740 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157086 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: E0216 13:08:57.157020 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302\": container with ID starting with 6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302 not found: ID does not exist" containerID="6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157126 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302"} err="failed to get container status \"6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302\": rpc error: code = NotFound desc = could not find container \"6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302\": container with ID starting with 6fed4deefd400f8352b7d2d2ee8a883f762b7425a3b2728598eca951ca2aa302 not found: ID does not exist" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157098 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157515 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92418e50-20f2-495c-9b06-963a5cd506d1-config\") on node 
\"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157546 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww4jl\" (UniqueName: \"kubernetes.io/projected/aca31aa1-429e-4f65-acd5-8896734d0713-kube-api-access-ww4jl\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.157573 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gndhd\" (UniqueName: \"kubernetes.io/projected/92418e50-20f2-495c-9b06-963a5cd506d1-kube-api-access-gndhd\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.230667 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aca31aa1-429e-4f65-acd5-8896734d0713" (UID: "aca31aa1-429e-4f65-acd5-8896734d0713"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.258570 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca31aa1-429e-4f65-acd5-8896734d0713-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.277483 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"] Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.290610 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" path="/var/lib/kubelet/pods/1e068ce5-e7a1-430c-97f7-fed550912288/volumes" Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.291260 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" path="/var/lib/kubelet/pods/dde0147a-01d8-430b-a230-9d8bdfffeadd/volumes" Feb 16 13:08:57 crc 
kubenswrapper[4740]: I0216 13:08:57.291970 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-76rjc"]
Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.959729 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skhl7"
Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.959786 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skhl7" event={"ID":"aca31aa1-429e-4f65-acd5-8896734d0713","Type":"ContainerDied","Data":"1fd3890eb822343ee419a082a86d7c0f7f37da9e46f4c355fdf04fe11a7d6219"}
Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.961111 4740 scope.go:117] "RemoveContainer" containerID="23c26fd38fe2c111190f23402f390777887a9905dc758d9f6ac51ca114039940"
Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.965600 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerStarted","Data":"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a"}
Feb 16 13:08:57 crc kubenswrapper[4740]: I0216 13:08:57.998912 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"]
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.003882 4740 scope.go:117] "RemoveContainer" containerID="c019ef5cfe36bc351706aadd4baf7b12f1204352eb6b5f824b90ceeaf17080aa"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.004444 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-skhl7"]
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.025989 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mnb45" podStartSLOduration=3.861798757 podStartE2EDuration="11.025967969s" podCreationTimestamp="2026-02-16 13:08:47 +0000 UTC" firstStartedPulling="2026-02-16 13:08:49.732686829 +0000 UTC m=+957.109035550" lastFinishedPulling="2026-02-16 13:08:56.896856041 +0000 UTC m=+964.273204762" observedRunningTime="2026-02-16 13:08:58.019734163 +0000 UTC m=+965.396082904" watchObservedRunningTime="2026-02-16 13:08:58.025967969 +0000 UTC m=+965.402316690"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.042719 4740 scope.go:117] "RemoveContainer" containerID="34e7fc9ac737075feeb07ec3e7ed9c671f07626ad33bba4d05665def21010930"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.105791 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mnb45"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.106164 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mnb45"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.349803 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7989-account-create-update-s6gss"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.378404 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2657\" (UniqueName: \"kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657\") pod \"14c97501-5a5c-4e03-8e50-cf7422806c32\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.378562 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts\") pod \"14c97501-5a5c-4e03-8e50-cf7422806c32\" (UID: \"14c97501-5a5c-4e03-8e50-cf7422806c32\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.379374 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14c97501-5a5c-4e03-8e50-cf7422806c32" (UID: "14c97501-5a5c-4e03-8e50-cf7422806c32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.389035 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657" (OuterVolumeSpecName: "kube-api-access-j2657") pod "14c97501-5a5c-4e03-8e50-cf7422806c32" (UID: "14c97501-5a5c-4e03-8e50-cf7422806c32"). InnerVolumeSpecName "kube-api-access-j2657". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.481689 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2657\" (UniqueName: \"kubernetes.io/projected/14c97501-5a5c-4e03-8e50-cf7422806c32-kube-api-access-j2657\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.482090 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14c97501-5a5c-4e03-8e50-cf7422806c32-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.742251 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9mvdt"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.747627 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b9b-account-create-update-njhb7"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.772452 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxmdt"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790519 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts\") pod \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790604 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s2vf\" (UniqueName: \"kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf\") pod \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\" (UID: \"bb88b05d-b7b7-4a08-847c-5e8d5cc98477\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790667 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts\") pod \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790721 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8d7k\" (UniqueName: \"kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k\") pod \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\" (UID: \"4996abf8-6c4b-42d0-99f2-aeacf2fd5591\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790794 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts\") pod \"5b945754-b567-43e9-a84a-4e0ea95900e7\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.790910 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss7vt\" (UniqueName: \"kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt\") pod \"5b945754-b567-43e9-a84a-4e0ea95900e7\" (UID: \"5b945754-b567-43e9-a84a-4e0ea95900e7\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.791006 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb88b05d-b7b7-4a08-847c-5e8d5cc98477" (UID: "bb88b05d-b7b7-4a08-847c-5e8d5cc98477"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.791325 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.791444 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4996abf8-6c4b-42d0-99f2-aeacf2fd5591" (UID: "4996abf8-6c4b-42d0-99f2-aeacf2fd5591"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.792285 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b945754-b567-43e9-a84a-4e0ea95900e7" (UID: "5b945754-b567-43e9-a84a-4e0ea95900e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.796172 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf" (OuterVolumeSpecName: "kube-api-access-9s2vf") pod "bb88b05d-b7b7-4a08-847c-5e8d5cc98477" (UID: "bb88b05d-b7b7-4a08-847c-5e8d5cc98477"). InnerVolumeSpecName "kube-api-access-9s2vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.796247 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8cb8-account-create-update-dgv8s"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.801539 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k" (OuterVolumeSpecName: "kube-api-access-x8d7k") pod "4996abf8-6c4b-42d0-99f2-aeacf2fd5591" (UID: "4996abf8-6c4b-42d0-99f2-aeacf2fd5591"). InnerVolumeSpecName "kube-api-access-x8d7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.816135 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt" (OuterVolumeSpecName: "kube-api-access-ss7vt") pod "5b945754-b567-43e9-a84a-4e0ea95900e7" (UID: "5b945754-b567-43e9-a84a-4e0ea95900e7"). InnerVolumeSpecName "kube-api-access-ss7vt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.888633 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9v664"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892237 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts\") pod \"b12e494a-5467-4264-a0e5-2596c61b4a73\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892362 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5q7r\" (UniqueName: \"kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r\") pod \"b12e494a-5467-4264-a0e5-2596c61b4a73\" (UID: \"b12e494a-5467-4264-a0e5-2596c61b4a73\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892664 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b12e494a-5467-4264-a0e5-2596c61b4a73" (UID: "b12e494a-5467-4264-a0e5-2596c61b4a73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892676 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss7vt\" (UniqueName: \"kubernetes.io/projected/5b945754-b567-43e9-a84a-4e0ea95900e7-kube-api-access-ss7vt\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892788 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s2vf\" (UniqueName: \"kubernetes.io/projected/bb88b05d-b7b7-4a08-847c-5e8d5cc98477-kube-api-access-9s2vf\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892803 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892827 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8d7k\" (UniqueName: \"kubernetes.io/projected/4996abf8-6c4b-42d0-99f2-aeacf2fd5591-kube-api-access-x8d7k\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.892836 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b945754-b567-43e9-a84a-4e0ea95900e7-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.896925 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r" (OuterVolumeSpecName: "kube-api-access-b5q7r") pod "b12e494a-5467-4264-a0e5-2596c61b4a73" (UID: "b12e494a-5467-4264-a0e5-2596c61b4a73"). InnerVolumeSpecName "kube-api-access-b5q7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.973732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9v664" event={"ID":"59544dcd-0bd1-4b5f-abf6-9ab972168af0","Type":"ContainerDied","Data":"56cc37204670ebf20ed0172df79fdfa9d93ecfda998e37b4c23425462cf86f06"}
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.973756 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9v664"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.973773 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56cc37204670ebf20ed0172df79fdfa9d93ecfda998e37b4c23425462cf86f06"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.975180 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8cb8-account-create-update-dgv8s" event={"ID":"b12e494a-5467-4264-a0e5-2596c61b4a73","Type":"ContainerDied","Data":"f8e58d548b1b3ef23567819be0b9506bd3ead4ae7b114fd20dfbcf6bcf468a99"}
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.975207 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e58d548b1b3ef23567819be0b9506bd3ead4ae7b114fd20dfbcf6bcf468a99"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.975244 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8cb8-account-create-update-dgv8s"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.976908 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9mvdt"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.976911 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9mvdt" event={"ID":"5b945754-b567-43e9-a84a-4e0ea95900e7","Type":"ContainerDied","Data":"47b3c8f37dcb32f3d527254dece6c0a4adcaf1fd786bf2bb59ffac5e66cd17db"}
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.977076 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47b3c8f37dcb32f3d527254dece6c0a4adcaf1fd786bf2bb59ffac5e66cd17db"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.983062 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7989-account-create-update-s6gss" event={"ID":"14c97501-5a5c-4e03-8e50-cf7422806c32","Type":"ContainerDied","Data":"e0d34f1ac2e6b6d66de3086dfecd98f8e49cffbc92e012b3ac7b1ea262486ec1"}
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.983098 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0d34f1ac2e6b6d66de3086dfecd98f8e49cffbc92e012b3ac7b1ea262486ec1"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.983160 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7989-account-create-update-s6gss"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.993550 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts\") pod \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.993588 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9c66\" (UniqueName: \"kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66\") pod \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\" (UID: \"59544dcd-0bd1-4b5f-abf6-9ab972168af0\") "
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.994374 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxmdt" event={"ID":"4996abf8-6c4b-42d0-99f2-aeacf2fd5591","Type":"ContainerDied","Data":"94a215b864e6eed7e3f2d37e6b50edeb4ad167f73f22c4d0e5cda0e3504635d6"}
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.994414 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94a215b864e6eed7e3f2d37e6b50edeb4ad167f73f22c4d0e5cda0e3504635d6"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.994450 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxmdt"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.994766 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59544dcd-0bd1-4b5f-abf6-9ab972168af0" (UID: "59544dcd-0bd1-4b5f-abf6-9ab972168af0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.996837 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59544dcd-0bd1-4b5f-abf6-9ab972168af0-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.996859 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12e494a-5467-4264-a0e5-2596c61b4a73-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.996869 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5q7r\" (UniqueName: \"kubernetes.io/projected/b12e494a-5467-4264-a0e5-2596c61b4a73-kube-api-access-b5q7r\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.997178 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66" (OuterVolumeSpecName: "kube-api-access-t9c66") pod "59544dcd-0bd1-4b5f-abf6-9ab972168af0" (UID: "59544dcd-0bd1-4b5f-abf6-9ab972168af0"). InnerVolumeSpecName "kube-api-access-t9c66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.998826 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b9b-account-create-update-njhb7" event={"ID":"bb88b05d-b7b7-4a08-847c-5e8d5cc98477","Type":"ContainerDied","Data":"ad0e406c973e4bb00f25e74fd93906ecc970c954b609dbdb218023b0dafa24d9"}
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.998868 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b9b-account-create-update-njhb7"
Feb 16 13:08:58 crc kubenswrapper[4740]: I0216 13:08:58.998874 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad0e406c973e4bb00f25e74fd93906ecc970c954b609dbdb218023b0dafa24d9"
Feb 16 13:08:59 crc kubenswrapper[4740]: I0216 13:08:59.098520 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9c66\" (UniqueName: \"kubernetes.io/projected/59544dcd-0bd1-4b5f-abf6-9ab972168af0-kube-api-access-t9c66\") on node \"crc\" DevicePath \"\""
Feb 16 13:08:59 crc kubenswrapper[4740]: I0216 13:08:59.181627 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mnb45" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="registry-server" probeResult="failure" output=<
Feb 16 13:08:59 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s
Feb 16 13:08:59 crc kubenswrapper[4740]: >
Feb 16 13:08:59 crc kubenswrapper[4740]: I0216 13:08:59.292234 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" path="/var/lib/kubelet/pods/92418e50-20f2-495c-9b06-963a5cd506d1/volumes"
Feb 16 13:08:59 crc kubenswrapper[4740]: I0216 13:08:59.292768 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" path="/var/lib/kubelet/pods/aca31aa1-429e-4f65-acd5-8896734d0713/volumes"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.335362 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4x6h6"]
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336295 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="registry-server"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336312 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="registry-server"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336332 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="extract-utilities"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336339 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="extract-utilities"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336350 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="dnsmasq-dns"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336359 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="dnsmasq-dns"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336371 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="extract-utilities"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336377 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="extract-utilities"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336388 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4996abf8-6c4b-42d0-99f2-aeacf2fd5591" containerName="mariadb-database-create"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336395 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4996abf8-6c4b-42d0-99f2-aeacf2fd5591" containerName="mariadb-database-create"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336402 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="init"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336408 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="init"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336418 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b945754-b567-43e9-a84a-4e0ea95900e7" containerName="mariadb-database-create"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336424 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b945754-b567-43e9-a84a-4e0ea95900e7" containerName="mariadb-database-create"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336434 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="extract-utilities"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336443 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="extract-utilities"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336454 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59544dcd-0bd1-4b5f-abf6-9ab972168af0" containerName="mariadb-database-create"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336460 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="59544dcd-0bd1-4b5f-abf6-9ab972168af0" containerName="mariadb-database-create"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336471 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="extract-content"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336479 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="extract-content"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336489 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb88b05d-b7b7-4a08-847c-5e8d5cc98477" containerName="mariadb-account-create-update"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336509 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb88b05d-b7b7-4a08-847c-5e8d5cc98477" containerName="mariadb-account-create-update"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336523 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12e494a-5467-4264-a0e5-2596c61b4a73" containerName="mariadb-account-create-update"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336529 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12e494a-5467-4264-a0e5-2596c61b4a73" containerName="mariadb-account-create-update"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336546 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c97501-5a5c-4e03-8e50-cf7422806c32" containerName="mariadb-account-create-update"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336553 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c97501-5a5c-4e03-8e50-cf7422806c32" containerName="mariadb-account-create-update"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336565 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="registry-server"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336570 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="registry-server"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336581 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="extract-content"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336587 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="extract-content"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336597 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="extract-content"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336603 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="extract-content"
Feb 16 13:09:00 crc kubenswrapper[4740]: E0216 13:09:00.336614 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="registry-server"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336620 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="registry-server"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336789 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4996abf8-6c4b-42d0-99f2-aeacf2fd5591" containerName="mariadb-database-create"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336801 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e068ce5-e7a1-430c-97f7-fed550912288" containerName="registry-server"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336823 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde0147a-01d8-430b-a230-9d8bdfffeadd" containerName="registry-server"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336835 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb88b05d-b7b7-4a08-847c-5e8d5cc98477" containerName="mariadb-account-create-update"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336843 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca31aa1-429e-4f65-acd5-8896734d0713" containerName="registry-server"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336852 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b945754-b567-43e9-a84a-4e0ea95900e7" containerName="mariadb-database-create"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336861 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b12e494a-5467-4264-a0e5-2596c61b4a73" containerName="mariadb-account-create-update"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336872 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="59544dcd-0bd1-4b5f-abf6-9ab972168af0" containerName="mariadb-database-create"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336879 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c97501-5a5c-4e03-8e50-cf7422806c32" containerName="mariadb-account-create-update"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.336888 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="92418e50-20f2-495c-9b06-963a5cd506d1" containerName="dnsmasq-dns"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.337560 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4x6h6"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.344579 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.362108 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4x6h6"]
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.420544 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjz6\" (UniqueName: \"kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6\") pod \"root-account-create-update-4x6h6\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.420592 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts\") pod \"root-account-create-update-4x6h6\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.521905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qjz6\" (UniqueName: \"kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6\") pod \"root-account-create-update-4x6h6\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.521953 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts\") pod \"root-account-create-update-4x6h6\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.522614 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts\") pod \"root-account-create-update-4x6h6\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.541004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qjz6\" (UniqueName: \"kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6\") pod \"root-account-create-update-4x6h6\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " pod="openstack/root-account-create-update-4x6h6"
Feb 16 13:09:00 crc kubenswrapper[4740]: I0216 13:09:00.657710 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4x6h6"
Feb 16 13:09:01 crc kubenswrapper[4740]: I0216 13:09:01.088394 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4x6h6"]
Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.023198 4740 generic.go:334] "Generic (PLEG): container finished" podID="36527dd8-2945-4976-894f-67360343ae7d" containerID="538fc5b7f98d6ae84456c8d0c054c6e4ef97df100afc94d9176239a93296b9b5" exitCode=0
Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.023284 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4x6h6" event={"ID":"36527dd8-2945-4976-894f-67360343ae7d","Type":"ContainerDied","Data":"538fc5b7f98d6ae84456c8d0c054c6e4ef97df100afc94d9176239a93296b9b5"}
Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.023444 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4x6h6" event={"ID":"36527dd8-2945-4976-894f-67360343ae7d","Type":"ContainerStarted","Data":"cc6e65dfd72d8fcacb6b28aa4b7c6b2a897d1c02cf62ea6e15e12e739f6bfbc7"}
Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.570726 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0"
Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.579517 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8953d6de-24a5-4645-b270-2bbafe5b17c5-etc-swift\") pod \"swift-storage-0\" (UID: \"8953d6de-24a5-4645-b270-2bbafe5b17c5\") " pod="openstack/swift-storage-0"
Feb 16 13:09:02 crc kubenswrapper[4740]: I0216 13:09:02.626618 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.032444 4740 generic.go:334] "Generic (PLEG): container finished" podID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerID="ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109" exitCode=0
Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.032505 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerDied","Data":"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109"}
Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.036861 4740 generic.go:334] "Generic (PLEG): container finished" podID="8a769496-58ca-4540-9dc4-bd8df7e682fc" containerID="b4fa66e4440a06d8e1925285b9bd7f879f94b33aa14f971048dd803186ba2997" exitCode=0
Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.036933 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rgvg" event={"ID":"8a769496-58ca-4540-9dc4-bd8df7e682fc","Type":"ContainerDied","Data":"b4fa66e4440a06d8e1925285b9bd7f879f94b33aa14f971048dd803186ba2997"}
Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.039083 4740 generic.go:334] "Generic (PLEG): container finished" podID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerID="63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133" exitCode=0
Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.039235 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerDied","Data":"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133"}
Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.211509 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.396554 4740 util.go:48] "No ready sandbox for pod
can be found. Need to start a new one" pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.486869 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qjz6\" (UniqueName: \"kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6\") pod \"36527dd8-2945-4976-894f-67360343ae7d\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.487437 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts\") pod \"36527dd8-2945-4976-894f-67360343ae7d\" (UID: \"36527dd8-2945-4976-894f-67360343ae7d\") " Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.488174 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36527dd8-2945-4976-894f-67360343ae7d" (UID: "36527dd8-2945-4976-894f-67360343ae7d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.492379 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6" (OuterVolumeSpecName: "kube-api-access-4qjz6") pod "36527dd8-2945-4976-894f-67360343ae7d" (UID: "36527dd8-2945-4976-894f-67360343ae7d"). InnerVolumeSpecName "kube-api-access-4qjz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.588780 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36527dd8-2945-4976-894f-67360343ae7d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.588829 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qjz6\" (UniqueName: \"kubernetes.io/projected/36527dd8-2945-4976-894f-67360343ae7d-kube-api-access-4qjz6\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.669518 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7lg27"] Feb 16 13:09:03 crc kubenswrapper[4740]: E0216 13:09:03.671328 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36527dd8-2945-4976-894f-67360343ae7d" containerName="mariadb-account-create-update" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.671354 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="36527dd8-2945-4976-894f-67360343ae7d" containerName="mariadb-account-create-update" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.671581 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="36527dd8-2945-4976-894f-67360343ae7d" containerName="mariadb-account-create-update" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.672192 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.674361 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.675039 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nblft" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.692315 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.692412 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkb5m\" (UniqueName: \"kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.692457 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.692503 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 
crc kubenswrapper[4740]: I0216 13:09:03.704960 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7lg27"] Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.794785 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.794892 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.794952 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkb5m\" (UniqueName: \"kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.794995 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.798800 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " 
pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.798930 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.799746 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.809111 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkb5m\" (UniqueName: \"kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m\") pod \"glance-db-sync-7lg27\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:03 crc kubenswrapper[4740]: I0216 13:09:03.994579 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7lg27" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.056302 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerStarted","Data":"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1"} Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.056700 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.059463 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerStarted","Data":"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088"} Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.059852 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.062209 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"8ddaaa78282064690f9c617f5bb12f22b7b429edae49a32326aa15776d83be01"} Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.065453 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4x6h6" event={"ID":"36527dd8-2945-4976-894f-67360343ae7d","Type":"ContainerDied","Data":"cc6e65dfd72d8fcacb6b28aa4b7c6b2a897d1c02cf62ea6e15e12e739f6bfbc7"} Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.065504 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc6e65dfd72d8fcacb6b28aa4b7c6b2a897d1c02cf62ea6e15e12e739f6bfbc7" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.065646 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4x6h6" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.104988 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.9875025 podStartE2EDuration="56.104972465s" podCreationTimestamp="2026-02-16 13:08:08 +0000 UTC" firstStartedPulling="2026-02-16 13:08:10.693876652 +0000 UTC m=+918.070225373" lastFinishedPulling="2026-02-16 13:08:28.811346617 +0000 UTC m=+936.187695338" observedRunningTime="2026-02-16 13:09:04.104603273 +0000 UTC m=+971.480952014" watchObservedRunningTime="2026-02-16 13:09:04.104972465 +0000 UTC m=+971.481321196" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.113377 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.74523276 podStartE2EDuration="55.113321398s" podCreationTimestamp="2026-02-16 13:08:09 +0000 UTC" firstStartedPulling="2026-02-16 13:08:18.420702959 +0000 UTC m=+925.797051680" lastFinishedPulling="2026-02-16 13:08:28.788791597 +0000 UTC m=+936.165140318" observedRunningTime="2026-02-16 13:09:04.081115714 +0000 UTC m=+971.457464445" watchObservedRunningTime="2026-02-16 13:09:04.113321398 +0000 UTC m=+971.489670119" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.426297 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.608900 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609111 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609128 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609187 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609216 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp2kn\" (UniqueName: \"kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609236 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.609266 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices\") pod \"8a769496-58ca-4540-9dc4-bd8df7e682fc\" (UID: \"8a769496-58ca-4540-9dc4-bd8df7e682fc\") " Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.610946 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.619324 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.653129 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn" (OuterVolumeSpecName: "kube-api-access-mp2kn") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "kube-api-access-mp2kn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.662042 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.683449 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts" (OuterVolumeSpecName: "scripts") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.706943 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.712032 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8a769496-58ca-4540-9dc4-bd8df7e682fc" (UID: "8a769496-58ca-4540-9dc4-bd8df7e682fc"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713451 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713468 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp2kn\" (UniqueName: \"kubernetes.io/projected/8a769496-58ca-4540-9dc4-bd8df7e682fc-kube-api-access-mp2kn\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713478 4740 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713486 4740 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a769496-58ca-4540-9dc4-bd8df7e682fc-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713496 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713504 4740 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a769496-58ca-4540-9dc4-bd8df7e682fc-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:04 crc kubenswrapper[4740]: I0216 13:09:04.713511 4740 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a769496-58ca-4540-9dc4-bd8df7e682fc-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.084287 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"ff664a449c345e69480ecdd2fda5b555b2b764f1bf8a046ae3b40f3be204b904"} Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.084643 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"101dc671806cd9be004e923bcece9a145bc3add4f1371138a82d2c99d82ff056"} Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.084660 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"b164356a1354c53ba3a298e2db8bec44018c009ecca15e734fa39e5378fa2c61"} Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.087899 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4rgvg" Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.087891 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4rgvg" event={"ID":"8a769496-58ca-4540-9dc4-bd8df7e682fc","Type":"ContainerDied","Data":"d46868a8f2da38aed3e8f09b139b4f8fd740b0b6241a64157a497339a73a45a9"} Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.087945 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d46868a8f2da38aed3e8f09b139b4f8fd740b0b6241a64157a497339a73a45a9" Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.092612 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7lg27"] Feb 16 13:09:05 crc kubenswrapper[4740]: W0216 13:09:05.102564 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf092c8c4_9a32_4093_9a5c_bc5fd05d600e.slice/crio-b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199 WatchSource:0}: Error finding container b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199: Status 404 returned error can't find the container with id b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199 Feb 16 13:09:05 crc kubenswrapper[4740]: I0216 13:09:05.376284 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 13:09:06 crc kubenswrapper[4740]: I0216 13:09:06.105286 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"1870062db4f02aa7ef673e04c92569d0bc2d9bfee4f609444d1c156208e1c246"} Feb 16 13:09:06 crc kubenswrapper[4740]: I0216 13:09:06.109831 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg27" event={"ID":"f092c8c4-9a32-4093-9a5c-bc5fd05d600e","Type":"ContainerStarted","Data":"b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199"} Feb 16 13:09:06 crc kubenswrapper[4740]: I0216 13:09:06.528866 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4x6h6"] Feb 16 13:09:06 crc kubenswrapper[4740]: I0216 13:09:06.540014 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4x6h6"] Feb 16 13:09:07 crc kubenswrapper[4740]: I0216 13:09:07.124122 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"59d94ed00a856d6a01660445439abcfac2f74397a922657ef2bcf79a5827089f"} Feb 16 13:09:07 crc kubenswrapper[4740]: I0216 13:09:07.124178 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"d25968fa30d1a53225f2b0022ad6c65d520c6f475bc8a6a088d2c9e430ed25a4"} Feb 16 13:09:07 crc kubenswrapper[4740]: I0216 13:09:07.124192 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"bbac753e3b96f5ff361d427b75a7fb31e2141fe41ff62be2273363686456aa5c"} Feb 16 13:09:07 crc kubenswrapper[4740]: I0216 13:09:07.290635 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36527dd8-2945-4976-894f-67360343ae7d" path="/var/lib/kubelet/pods/36527dd8-2945-4976-894f-67360343ae7d/volumes" Feb 16 13:09:08 crc kubenswrapper[4740]: I0216 13:09:08.141263 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"9eb7886d68d4fb7b23f87ebcde6313d42d4df7456bf82bc6673f9b37546b3e94"} Feb 16 13:09:08 crc kubenswrapper[4740]: I0216 13:09:08.184047 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:09:08 crc kubenswrapper[4740]: I0216 13:09:08.254075 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:09:08 crc kubenswrapper[4740]: I0216 13:09:08.427194 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:09:09 crc kubenswrapper[4740]: I0216 13:09:09.157333 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"89ee86072a2aa5aa769dbb3fc8af0dedae416012327b0da0f671dc29d24efdd4"} Feb 16 13:09:09 crc kubenswrapper[4740]: I0216 13:09:09.157615 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"b9e49c34b73f2b8894798822aec411ad86ebcdb3331908430d8272079117e6b5"} Feb 16 13:09:09 crc kubenswrapper[4740]: I0216 13:09:09.157633 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"f3f21ed5de91058db3263fc496a34bc22e2b441331fd66b54b4bc9cead990032"} Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.205247 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mnb45" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="registry-server" containerID="cri-o://4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a" gracePeriod=2 Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.207696 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"8eac3dd791f97f1afd7163643c0e23e5b0a7f54c577caea8cb7f938a36e5839f"} Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.207758 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"d360d681d9c438fcaa27acba9343951212b1e02544de7c1e1304e90ef2da75e9"} Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.207771 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"52696d4a91bfa111df96105ddf56f0e0e399ae9a00207df032fcb4ac13e67d4e"} Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.207780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"8953d6de-24a5-4645-b270-2bbafe5b17c5","Type":"ContainerStarted","Data":"36591053a2d7c2702b22c3072c1417e32b3dc91bddf0760490ffdaeb908b37f9"} Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.274671 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.130362359 podStartE2EDuration="25.274649433s" podCreationTimestamp="2026-02-16 13:08:45 +0000 UTC" firstStartedPulling="2026-02-16 13:09:03.232558502 +0000 UTC m=+970.608907223" lastFinishedPulling="2026-02-16 13:09:08.376845566 +0000 UTC m=+975.753194297" observedRunningTime="2026-02-16 13:09:10.26629469 +0000 UTC m=+977.642643431" watchObservedRunningTime="2026-02-16 13:09:10.274649433 +0000 UTC m=+977.650998154" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.608427 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:09:10 crc kubenswrapper[4740]: E0216 13:09:10.608868 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a769496-58ca-4540-9dc4-bd8df7e682fc" containerName="swift-ring-rebalance" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.608886 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a769496-58ca-4540-9dc4-bd8df7e682fc" containerName="swift-ring-rebalance" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.609094 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a769496-58ca-4540-9dc4-bd8df7e682fc" containerName="swift-ring-rebalance" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.610056 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.623150 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.627794 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736679 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724rn\" (UniqueName: \"kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736742 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736772 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736866 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736897 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.736939 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.812467 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839030 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839115 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839172 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839239 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-724rn\" (UniqueName: \"kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839267 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.839288 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.840266 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.840381 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.842386 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.842715 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.843081 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.887916 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-724rn\" (UniqueName: \"kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn\") pod \"dnsmasq-dns-6d5b6d6b67-gghds\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.936019 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.940928 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4zjm\" (UniqueName: \"kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm\") pod \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.941001 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content\") pod \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.941124 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities\") pod \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\" (UID: \"9e55b787-ebf2-405e-b1ef-545e0afe08b7\") " Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.942262 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities" (OuterVolumeSpecName: "utilities") pod "9e55b787-ebf2-405e-b1ef-545e0afe08b7" (UID: "9e55b787-ebf2-405e-b1ef-545e0afe08b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.950052 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm" (OuterVolumeSpecName: "kube-api-access-l4zjm") pod "9e55b787-ebf2-405e-b1ef-545e0afe08b7" (UID: "9e55b787-ebf2-405e-b1ef-545e0afe08b7"). InnerVolumeSpecName "kube-api-access-l4zjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:10 crc kubenswrapper[4740]: I0216 13:09:10.970567 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e55b787-ebf2-405e-b1ef-545e0afe08b7" (UID: "9e55b787-ebf2-405e-b1ef-545e0afe08b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.043008 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4zjm\" (UniqueName: \"kubernetes.io/projected/9e55b787-ebf2-405e-b1ef-545e0afe08b7-kube-api-access-l4zjm\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.043043 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.043052 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e55b787-ebf2-405e-b1ef-545e0afe08b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.216533 4740 generic.go:334] "Generic (PLEG): container finished" podID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerID="4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a" exitCode=0 Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.216629 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerDied","Data":"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a"} Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.216720 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-mnb45" event={"ID":"9e55b787-ebf2-405e-b1ef-545e0afe08b7","Type":"ContainerDied","Data":"674fc74e32c41001843c95fbf70084ec6f8d9862901843c0b561d5e3afe04969"} Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.216748 4740 scope.go:117] "RemoveContainer" containerID="4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.216645 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnb45" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.256033 4740 scope.go:117] "RemoveContainer" containerID="5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.258520 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.265672 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnb45"] Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.277217 4740 scope.go:117] "RemoveContainer" containerID="31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.294207 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" path="/var/lib/kubelet/pods/9e55b787-ebf2-405e-b1ef-545e0afe08b7/volumes" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.295121 4740 scope.go:117] "RemoveContainer" containerID="4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a" Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.296118 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a\": container with ID 
starting with 4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a not found: ID does not exist" containerID="4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.296164 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a"} err="failed to get container status \"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a\": rpc error: code = NotFound desc = could not find container \"4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a\": container with ID starting with 4aca4ae7c63cf1b8b24eeb4d210feba306457b475da9fb573269169b05c71d8a not found: ID does not exist" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.296189 4740 scope.go:117] "RemoveContainer" containerID="5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c" Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.296687 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c\": container with ID starting with 5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c not found: ID does not exist" containerID="5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.296706 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c"} err="failed to get container status \"5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c\": rpc error: code = NotFound desc = could not find container \"5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c\": container with ID starting with 5fc4c09d4ea176d2f986485d2e1c5d3c9e0c1e8d7b46e431abda5b9fd1f8a07c not found: 
ID does not exist" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.296736 4740 scope.go:117] "RemoveContainer" containerID="31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081" Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.297443 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081\": container with ID starting with 31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081 not found: ID does not exist" containerID="31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.297464 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081"} err="failed to get container status \"31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081\": rpc error: code = NotFound desc = could not find container \"31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081\": container with ID starting with 31c4cc8f30207eb4a1cb14104656748edff64bc09f1dc4ea864ec02070fa0081 not found: ID does not exist" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.405579 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:09:11 crc kubenswrapper[4740]: W0216 13:09:11.420361 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56781f2b_b49d_4234_981b_a01a10dfab05.slice/crio-07254c2d9a89687f122ec073a09001283bb6a4e93052f9c40534b8fe35f0fdbf WatchSource:0}: Error finding container 07254c2d9a89687f122ec073a09001283bb6a4e93052f9c40534b8fe35f0fdbf: Status 404 returned error can't find the container with id 07254c2d9a89687f122ec073a09001283bb6a4e93052f9c40534b8fe35f0fdbf Feb 16 13:09:11 crc 
kubenswrapper[4740]: I0216 13:09:11.547642 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gqbdm"] Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.548062 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="extract-utilities" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.548086 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="extract-utilities" Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.548101 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="registry-server" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.548110 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="registry-server" Feb 16 13:09:11 crc kubenswrapper[4740]: E0216 13:09:11.548128 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="extract-content" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.548135 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="extract-content" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.548337 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e55b787-ebf2-405e-b1ef-545e0afe08b7" containerName="registry-server" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.549068 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gqbdm" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.561205 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.570031 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gqbdm"] Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.657742 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97x9m\" (UniqueName: \"kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m\") pod \"root-account-create-update-gqbdm\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.657876 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts\") pod \"root-account-create-update-gqbdm\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.758882 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts\") pod \"root-account-create-update-gqbdm\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.759007 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97x9m\" (UniqueName: \"kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m\") pod \"root-account-create-update-gqbdm\" (UID: 
\"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.759772 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts\") pod \"root-account-create-update-gqbdm\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.785915 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97x9m\" (UniqueName: \"kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m\") pod \"root-account-create-update-gqbdm\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") " pod="openstack/root-account-create-update-gqbdm" Feb 16 13:09:11 crc kubenswrapper[4740]: I0216 13:09:11.904372 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gqbdm" Feb 16 13:09:12 crc kubenswrapper[4740]: I0216 13:09:12.232647 4740 generic.go:334] "Generic (PLEG): container finished" podID="56781f2b-b49d-4234-981b-a01a10dfab05" containerID="96de88cee058193affe71534965c618fbce9086a5fd824cc8ef53366e9b1cf91" exitCode=0 Feb 16 13:09:12 crc kubenswrapper[4740]: I0216 13:09:12.232985 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" event={"ID":"56781f2b-b49d-4234-981b-a01a10dfab05","Type":"ContainerDied","Data":"96de88cee058193affe71534965c618fbce9086a5fd824cc8ef53366e9b1cf91"} Feb 16 13:09:12 crc kubenswrapper[4740]: I0216 13:09:12.233007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" event={"ID":"56781f2b-b49d-4234-981b-a01a10dfab05","Type":"ContainerStarted","Data":"07254c2d9a89687f122ec073a09001283bb6a4e93052f9c40534b8fe35f0fdbf"} Feb 16 13:09:12 crc kubenswrapper[4740]: I0216 13:09:12.364708 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gqbdm"] Feb 16 13:09:13 crc kubenswrapper[4740]: I0216 13:09:13.862827 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qnt79" podUID="04335a5d-7cac-4a47-982c-70cae9db69ff" containerName="ovn-controller" probeResult="failure" output=< Feb 16 13:09:13 crc kubenswrapper[4740]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 13:09:13 crc kubenswrapper[4740]: > Feb 16 13:09:13 crc kubenswrapper[4740]: I0216 13:09:13.895788 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:09:13 crc kubenswrapper[4740]: I0216 13:09:13.912571 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-crblj" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.145663 4740 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qnt79-config-5jvnm"] Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.146965 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.149837 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.176152 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79-config-5jvnm"] Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304044 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304166 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxk6c\" (UniqueName: \"kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304193 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304227 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304293 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.304328 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.406107 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxk6c\" (UniqueName: \"kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.406172 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 
13:09:14.406211 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.406274 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.406306 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.406347 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.408070 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.409001 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.411963 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.412059 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.412106 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.428367 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxk6c\" (UniqueName: \"kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c\") pod \"ovn-controller-qnt79-config-5jvnm\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") " pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:14 crc kubenswrapper[4740]: I0216 13:09:14.469072 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qnt79-config-5jvnm" Feb 16 13:09:18 crc kubenswrapper[4740]: I0216 13:09:18.882236 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qnt79" podUID="04335a5d-7cac-4a47-982c-70cae9db69ff" containerName="ovn-controller" probeResult="failure" output=< Feb 16 13:09:18 crc kubenswrapper[4740]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 13:09:18 crc kubenswrapper[4740]: > Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.097030 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.470101 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.528740 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f6f4-account-create-update-l7nbq"] Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.530133 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f6f4-account-create-update-l7nbq" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.539318 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.540215 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-plzhg"] Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.541422 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-plzhg" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.553336 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f6f4-account-create-update-l7nbq"] Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.589241 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-plzhg"] Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.635377 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wnf\" (UniqueName: \"kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf\") pod \"cinder-f6f4-account-create-update-l7nbq\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.635449 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcn7\" (UniqueName: \"kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.635485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.635503 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts\") pod \"cinder-f6f4-account-create-update-l7nbq\" (UID: 
\"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.737135 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4wnf\" (UniqueName: \"kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf\") pod \"cinder-f6f4-account-create-update-l7nbq\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.737199 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcn7\" (UniqueName: \"kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.737224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.737240 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts\") pod \"cinder-f6f4-account-create-update-l7nbq\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.737922 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts\") pod 
\"cinder-f6f4-account-create-update-l7nbq\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.739046 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.771981 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4wnf\" (UniqueName: \"kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf\") pod \"cinder-f6f4-account-create-update-l7nbq\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") " pod="openstack/cinder-f6f4-account-create-update-l7nbq" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.775405 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcn7\" (UniqueName: \"kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7\") pod \"cinder-db-create-plzhg\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") " pod="openstack/cinder-db-create-plzhg" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.802011 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-j27bj"] Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.803418 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j27bj" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.808661 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-858d-account-create-update-xr2fs"] Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.810095 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-858d-account-create-update-xr2fs" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.811850 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.816752 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-j27bj"] Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.825957 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-858d-account-create-update-xr2fs"] Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.872610 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f6f4-account-create-update-l7nbq" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.883800 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-plzhg" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.893490 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qvqg7"] Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.894664 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.897401 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.901428 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nljh" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.901650 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.910123 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.911479 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vf54h"] Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.912565 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.927975 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qvqg7"] Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.939643 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjns\" (UniqueName: \"kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.939693 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts\") pod \"barbican-db-create-j27bj\" (UID: 
\"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.939728 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.939790 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwvk5\" (UniqueName: \"kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5\") pod \"barbican-db-create-j27bj\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj" Feb 16 13:09:20 crc kubenswrapper[4740]: I0216 13:09:20.948416 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vf54h"] Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.008596 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2e1c-account-create-update-htmg9"] Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.009582 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.011980 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.028919 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2e1c-account-create-update-htmg9"] Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.040960 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts\") pod \"barbican-db-create-j27bj\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041026 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041085 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " 
pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041138 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkjs8\" (UniqueName: \"kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041158 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwvk5\" (UniqueName: \"kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5\") pod \"barbican-db-create-j27bj\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041194 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gstg\" (UniqueName: \"kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041243 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.041265 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjns\" (UniqueName: \"kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: 
\"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.042366 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts\") pod \"barbican-db-create-j27bj\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.042796 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.059053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjns\" (UniqueName: \"kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns\") pod \"barbican-858d-account-create-update-xr2fs\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") " pod="openstack/barbican-858d-account-create-update-xr2fs" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.090479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwvk5\" (UniqueName: \"kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5\") pod \"barbican-db-create-j27bj\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") " pod="openstack/barbican-db-create-j27bj" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.142864 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gstg\" (UniqueName: \"kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg\") pod \"keystone-db-sync-qvqg7\" 
(UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.142931 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc4k6\" (UniqueName: \"kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.142951 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.143001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.143352 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.143421 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data\") pod 
\"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.143452 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkjs8\" (UniqueName: \"kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.144125 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.146333 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.146611 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.153438 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-j27bj" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.159300 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkjs8\" (UniqueName: \"kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8\") pod \"neutron-db-create-vf54h\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") " pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.162249 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-858d-account-create-update-xr2fs" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.166643 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gstg\" (UniqueName: \"kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg\") pod \"keystone-db-sync-qvqg7\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") " pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.218192 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qvqg7" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.239516 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vf54h" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.245288 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc4k6\" (UniqueName: \"kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.245332 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.246192 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.261421 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc4k6\" (UniqueName: \"kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6\") pod \"neutron-2e1c-account-create-update-htmg9\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") " pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:21 crc kubenswrapper[4740]: I0216 13:09:21.325711 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2e1c-account-create-update-htmg9" Feb 16 13:09:22 crc kubenswrapper[4740]: W0216 13:09:22.080077 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15147587_626f_4577_b5af_b8f574f60152.slice/crio-49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e WatchSource:0}: Error finding container 49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e: Status 404 returned error can't find the container with id 49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e Feb 16 13:09:22 crc kubenswrapper[4740]: E0216 13:09:22.207288 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 16 13:09:22 crc kubenswrapper[4740]: E0216 13:09:22.209134 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkb5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-7lg27_openstack(f092c8c4-9a32-4093-9a5c-bc5fd05d600e): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 16 13:09:22 crc kubenswrapper[4740]: E0216 13:09:22.210472 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-7lg27" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" Feb 16 13:09:22 crc kubenswrapper[4740]: I0216 13:09:22.390333 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gqbdm" event={"ID":"15147587-626f-4577-b5af-b8f574f60152","Type":"ContainerStarted","Data":"49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e"} Feb 16 13:09:22 crc kubenswrapper[4740]: E0216 13:09:22.479233 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-7lg27" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" Feb 16 13:09:22 crc kubenswrapper[4740]: I0216 13:09:22.796885 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79-config-5jvnm"] Feb 16 13:09:22 crc kubenswrapper[4740]: W0216 13:09:22.857098 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod052d2ebf_cf79_4395_b125_d955d8144cef.slice/crio-785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222 WatchSource:0}: Error finding container 785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222: Status 404 returned error can't find the container with id 785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222 Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.018174 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-f6f4-account-create-update-l7nbq"] Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.188731 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qvqg7"] Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.211391 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-plzhg"] Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.227556 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2e1c-account-create-update-htmg9"] Feb 16 13:09:23 crc kubenswrapper[4740]: W0216 13:09:23.262038 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5296850e_63c0_4801_bff8_bc5213555f58.slice/crio-1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956 WatchSource:0}: Error finding container 1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956: Status 404 returned error can't find the container with id 1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956 Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.408758 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2e1c-account-create-update-htmg9" event={"ID":"9aafb0ee-2681-48a9-b1e0-2442d0a16541","Type":"ContainerStarted","Data":"29cb9275df2ec6cccc97f91765fd05285ef7a3fb2cdc73b6751e434f88ea8a45"} Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.411753 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f6f4-account-create-update-l7nbq" event={"ID":"685d1543-1ab9-435f-b2c0-2a54c104e86f","Type":"ContainerStarted","Data":"2c739ebc1f1677ff442d40ca962571c9b9950c0f73007406504adc0d79d91e7b"} Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.411794 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f6f4-account-create-update-l7nbq" 
event={"ID":"685d1543-1ab9-435f-b2c0-2a54c104e86f","Type":"ContainerStarted","Data":"959763bb161fc3b22a27e1b1f632d4220099b91dd322483a10b3fcd00233dc6e"}
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.422120 4740 generic.go:334] "Generic (PLEG): container finished" podID="15147587-626f-4577-b5af-b8f574f60152" containerID="cd58c8b5fc614deaab5c81fb1b971a1824dde743ab1799ae9f95e3e1c7789b94" exitCode=0
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.422177 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gqbdm" event={"ID":"15147587-626f-4577-b5af-b8f574f60152","Type":"ContainerDied","Data":"cd58c8b5fc614deaab5c81fb1b971a1824dde743ab1799ae9f95e3e1c7789b94"}
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.425292 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qvqg7" event={"ID":"3cea0875-b3a8-4a52-84ff-d9215408294b","Type":"ContainerStarted","Data":"5a0b7120fe35634905135cb504138e76441745ac6645a1eac08b8b566d8c1013"}
Feb 16 13:09:23 crc kubenswrapper[4740]: W0216 13:09:23.430032 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod634925bb_5381_4298_a256_447ef56a2f2a.slice/crio-00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed WatchSource:0}: Error finding container 00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed: Status 404 returned error can't find the container with id 00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.430232 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-plzhg" event={"ID":"5296850e-63c0-4801-bff8-bc5213555f58","Type":"ContainerStarted","Data":"1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956"}
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.444678 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" event={"ID":"56781f2b-b49d-4234-981b-a01a10dfab05","Type":"ContainerStarted","Data":"9b0f87ae4e501b819b69299fe6b0bca555aa557876b95fd376763eb368141597"}
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.446010 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-858d-account-create-update-xr2fs"]
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.446046 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds"
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.456233 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-5jvnm" event={"ID":"052d2ebf-cf79-4395-b125-d955d8144cef","Type":"ContainerStarted","Data":"8f70084c6e792770a51a2232e265409290d0a24738129fda218357cafbb39d87"}
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.456286 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-5jvnm" event={"ID":"052d2ebf-cf79-4395-b125-d955d8144cef","Type":"ContainerStarted","Data":"785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222"}
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.469580 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vf54h"]
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.483697 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-f6f4-account-create-update-l7nbq" podStartSLOduration=3.483679206 podStartE2EDuration="3.483679206s" podCreationTimestamp="2026-02-16 13:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:23.435544591 +0000 UTC m=+990.811893332" watchObservedRunningTime="2026-02-16 13:09:23.483679206 +0000 UTC m=+990.860027927"
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.502565 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-j27bj"]
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.518128 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" podStartSLOduration=13.518105189 podStartE2EDuration="13.518105189s" podCreationTimestamp="2026-02-16 13:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:23.497713607 +0000 UTC m=+990.874062328" watchObservedRunningTime="2026-02-16 13:09:23.518105189 +0000 UTC m=+990.894453910"
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.531214 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qnt79-config-5jvnm" podStartSLOduration=9.531197041 podStartE2EDuration="9.531197041s" podCreationTimestamp="2026-02-16 13:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:23.528357292 +0000 UTC m=+990.904706023" watchObservedRunningTime="2026-02-16 13:09:23.531197041 +0000 UTC m=+990.907545762"
Feb 16 13:09:23 crc kubenswrapper[4740]: I0216 13:09:23.892991 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qnt79"
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.487011 4740 generic.go:334] "Generic (PLEG): container finished" podID="052d2ebf-cf79-4395-b125-d955d8144cef" containerID="8f70084c6e792770a51a2232e265409290d0a24738129fda218357cafbb39d87" exitCode=0
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.487351 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-5jvnm" event={"ID":"052d2ebf-cf79-4395-b125-d955d8144cef","Type":"ContainerDied","Data":"8f70084c6e792770a51a2232e265409290d0a24738129fda218357cafbb39d87"}
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.489536 4740 generic.go:334] "Generic (PLEG): container finished" podID="9aafb0ee-2681-48a9-b1e0-2442d0a16541" containerID="f5820346a7406bd0978f9265ad799cb90df8fd2faf62bf128649990ff88a581c" exitCode=0
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.489602 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2e1c-account-create-update-htmg9" event={"ID":"9aafb0ee-2681-48a9-b1e0-2442d0a16541","Type":"ContainerDied","Data":"f5820346a7406bd0978f9265ad799cb90df8fd2faf62bf128649990ff88a581c"}
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.494682 4740 generic.go:334] "Generic (PLEG): container finished" podID="685d1543-1ab9-435f-b2c0-2a54c104e86f" containerID="2c739ebc1f1677ff442d40ca962571c9b9950c0f73007406504adc0d79d91e7b" exitCode=0
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.494751 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f6f4-account-create-update-l7nbq" event={"ID":"685d1543-1ab9-435f-b2c0-2a54c104e86f","Type":"ContainerDied","Data":"2c739ebc1f1677ff442d40ca962571c9b9950c0f73007406504adc0d79d91e7b"}
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.497461 4740 generic.go:334] "Generic (PLEG): container finished" podID="5296850e-63c0-4801-bff8-bc5213555f58" containerID="297fab87042f05bdda341fb78ed7de393ee4aec91b3ea8c4dbadb862e85e4e33" exitCode=0
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.497901 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-plzhg" event={"ID":"5296850e-63c0-4801-bff8-bc5213555f58","Type":"ContainerDied","Data":"297fab87042f05bdda341fb78ed7de393ee4aec91b3ea8c4dbadb862e85e4e33"}
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.506348 4740 generic.go:334] "Generic (PLEG): container finished" podID="634925bb-5381-4298-a256-447ef56a2f2a" containerID="a426852617c1fdbfaae0a0c105e30e4a9ba96bd1307ceb03aae494de8c516444" exitCode=0
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.506444 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-858d-account-create-update-xr2fs" event={"ID":"634925bb-5381-4298-a256-447ef56a2f2a","Type":"ContainerDied","Data":"a426852617c1fdbfaae0a0c105e30e4a9ba96bd1307ceb03aae494de8c516444"}
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.506494 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-858d-account-create-update-xr2fs" event={"ID":"634925bb-5381-4298-a256-447ef56a2f2a","Type":"ContainerStarted","Data":"00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed"}
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.512743 4740 generic.go:334] "Generic (PLEG): container finished" podID="a14f3fd5-4d53-4336-85b1-7d636060bd0a" containerID="c259d38cb4fa3c5851c1172b3420cec9a5f775ccc35003b355c462a18e258ac9" exitCode=0
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.513000 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vf54h" event={"ID":"a14f3fd5-4d53-4336-85b1-7d636060bd0a","Type":"ContainerDied","Data":"c259d38cb4fa3c5851c1172b3420cec9a5f775ccc35003b355c462a18e258ac9"}
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.513135 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vf54h" event={"ID":"a14f3fd5-4d53-4336-85b1-7d636060bd0a","Type":"ContainerStarted","Data":"a832dcf84b0f8322b7d99008949d66452327821e1078edfcb734f8844ca6ea67"}
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.517469 4740 generic.go:334] "Generic (PLEG): container finished" podID="65301f64-cd42-4faf-b454-a43c7c7096a1" containerID="f2c94c167796a74c5aec9d021793c96199b01a3ee67b46b0fd7d1575574cf5b7" exitCode=0
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.517661 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j27bj" event={"ID":"65301f64-cd42-4faf-b454-a43c7c7096a1","Type":"ContainerDied","Data":"f2c94c167796a74c5aec9d021793c96199b01a3ee67b46b0fd7d1575574cf5b7"}
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.517788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j27bj" event={"ID":"65301f64-cd42-4faf-b454-a43c7c7096a1","Type":"ContainerStarted","Data":"6f2d9667515562668bded18ba8d455174b07ebe7fa37c1dddd5c4e21808fe36d"}
Feb 16 13:09:24 crc kubenswrapper[4740]: I0216 13:09:24.906942 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gqbdm"
Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.056468 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97x9m\" (UniqueName: \"kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m\") pod \"15147587-626f-4577-b5af-b8f574f60152\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") "
Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.056916 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts\") pod \"15147587-626f-4577-b5af-b8f574f60152\" (UID: \"15147587-626f-4577-b5af-b8f574f60152\") "
Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.057731 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15147587-626f-4577-b5af-b8f574f60152" (UID: "15147587-626f-4577-b5af-b8f574f60152"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.063446 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m" (OuterVolumeSpecName: "kube-api-access-97x9m") pod "15147587-626f-4577-b5af-b8f574f60152" (UID: "15147587-626f-4577-b5af-b8f574f60152"). InnerVolumeSpecName "kube-api-access-97x9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.158417 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97x9m\" (UniqueName: \"kubernetes.io/projected/15147587-626f-4577-b5af-b8f574f60152-kube-api-access-97x9m\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.158452 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15147587-626f-4577-b5af-b8f574f60152-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.543439 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gqbdm"
Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.544197 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gqbdm" event={"ID":"15147587-626f-4577-b5af-b8f574f60152","Type":"ContainerDied","Data":"49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e"}
Feb 16 13:09:25 crc kubenswrapper[4740]: I0216 13:09:25.544247 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49b2e40538503b5aca792037ec85ac41f2bf8a79d20999166579ecb99524706e"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.542113 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vf54h"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.550098 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-plzhg"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.587040 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-j27bj" event={"ID":"65301f64-cd42-4faf-b454-a43c7c7096a1","Type":"ContainerDied","Data":"6f2d9667515562668bded18ba8d455174b07ebe7fa37c1dddd5c4e21808fe36d"}
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.587387 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f2d9667515562668bded18ba8d455174b07ebe7fa37c1dddd5c4e21808fe36d"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.588501 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-858d-account-create-update-xr2fs"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.593843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-5jvnm" event={"ID":"052d2ebf-cf79-4395-b125-d955d8144cef","Type":"ContainerDied","Data":"785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222"}
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.593892 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="785feaff7cc09b2e5924ca6a076330406061d224a176efd0f2aa5bfc67f20222"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.596936 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2e1c-account-create-update-htmg9" event={"ID":"9aafb0ee-2681-48a9-b1e0-2442d0a16541","Type":"ContainerDied","Data":"29cb9275df2ec6cccc97f91765fd05285ef7a3fb2cdc73b6751e434f88ea8a45"}
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.596973 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29cb9275df2ec6cccc97f91765fd05285ef7a3fb2cdc73b6751e434f88ea8a45"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.597334 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2e1c-account-create-update-htmg9"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.598540 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f6f4-account-create-update-l7nbq" event={"ID":"685d1543-1ab9-435f-b2c0-2a54c104e86f","Type":"ContainerDied","Data":"959763bb161fc3b22a27e1b1f632d4220099b91dd322483a10b3fcd00233dc6e"}
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.598564 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="959763bb161fc3b22a27e1b1f632d4220099b91dd322483a10b3fcd00233dc6e"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.600699 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-plzhg" event={"ID":"5296850e-63c0-4801-bff8-bc5213555f58","Type":"ContainerDied","Data":"1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956"}
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.600721 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f60b93640e31fa23e121818c7c4eaf1b7967f35ab3e57f596075d58b036b956"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.600770 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-plzhg"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.601034 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.602165 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-858d-account-create-update-xr2fs"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.602169 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-858d-account-create-update-xr2fs" event={"ID":"634925bb-5381-4298-a256-447ef56a2f2a","Type":"ContainerDied","Data":"00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed"}
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.602200 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00e9d94952934d384ef7f137a927741de406b96cb418edea110bcd43d989d4ed"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.607985 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vf54h" event={"ID":"a14f3fd5-4d53-4336-85b1-7d636060bd0a","Type":"ContainerDied","Data":"a832dcf84b0f8322b7d99008949d66452327821e1078edfcb734f8844ca6ea67"}
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.608016 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a832dcf84b0f8322b7d99008949d66452327821e1078edfcb734f8844ca6ea67"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.608065 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vf54h"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.614198 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f6f4-account-create-update-l7nbq"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.622062 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j27bj"
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.626781 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts\") pod \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.626895 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfcn7\" (UniqueName: \"kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7\") pod \"5296850e-63c0-4801-bff8-bc5213555f58\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.626921 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkjs8\" (UniqueName: \"kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8\") pod \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\" (UID: \"a14f3fd5-4d53-4336-85b1-7d636060bd0a\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.626987 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts\") pod \"5296850e-63c0-4801-bff8-bc5213555f58\" (UID: \"5296850e-63c0-4801-bff8-bc5213555f58\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.627697 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5296850e-63c0-4801-bff8-bc5213555f58" (UID: "5296850e-63c0-4801-bff8-bc5213555f58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.627774 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a14f3fd5-4d53-4336-85b1-7d636060bd0a" (UID: "a14f3fd5-4d53-4336-85b1-7d636060bd0a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.633409 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8" (OuterVolumeSpecName: "kube-api-access-fkjs8") pod "a14f3fd5-4d53-4336-85b1-7d636060bd0a" (UID: "a14f3fd5-4d53-4336-85b1-7d636060bd0a"). InnerVolumeSpecName "kube-api-access-fkjs8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.633569 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7" (OuterVolumeSpecName: "kube-api-access-dfcn7") pod "5296850e-63c0-4801-bff8-bc5213555f58" (UID: "5296850e-63c0-4801-bff8-bc5213555f58"). InnerVolumeSpecName "kube-api-access-dfcn7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728672 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts\") pod \"685d1543-1ab9-435f-b2c0-2a54c104e86f\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728831 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwvk5\" (UniqueName: \"kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5\") pod \"65301f64-cd42-4faf-b454-a43c7c7096a1\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728867 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4wnf\" (UniqueName: \"kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf\") pod \"685d1543-1ab9-435f-b2c0-2a54c104e86f\" (UID: \"685d1543-1ab9-435f-b2c0-2a54c104e86f\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728911 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgjns\" (UniqueName: \"kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns\") pod \"634925bb-5381-4298-a256-447ef56a2f2a\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728947 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728969 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc4k6\" (UniqueName: \"kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6\") pod \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.728995 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxk6c\" (UniqueName: \"kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729016 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729238 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "685d1543-1ab9-435f-b2c0-2a54c104e86f" (UID: "685d1543-1ab9-435f-b2c0-2a54c104e86f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729383 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729510 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts\") pod \"65301f64-cd42-4faf-b454-a43c7c7096a1\" (UID: \"65301f64-cd42-4faf-b454-a43c7c7096a1\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729549 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729612 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts\") pod \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\" (UID: \"9aafb0ee-2681-48a9-b1e0-2442d0a16541\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729643 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run" (OuterVolumeSpecName: "var-run") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729647 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729706 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts\") pod \"634925bb-5381-4298-a256-447ef56a2f2a\" (UID: \"634925bb-5381-4298-a256-447ef56a2f2a\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729799 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.729897 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn\") pod \"052d2ebf-cf79-4395-b125-d955d8144cef\" (UID: \"052d2ebf-cf79-4395-b125-d955d8144cef\") "
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730072 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730138 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65301f64-cd42-4faf-b454-a43c7c7096a1" (UID: "65301f64-cd42-4faf-b454-a43c7c7096a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730197 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "634925bb-5381-4298-a256-447ef56a2f2a" (UID: "634925bb-5381-4298-a256-447ef56a2f2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730199 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9aafb0ee-2681-48a9-b1e0-2442d0a16541" (UID: "9aafb0ee-2681-48a9-b1e0-2442d0a16541"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730568 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts" (OuterVolumeSpecName: "scripts") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730648 4740 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730672 4740 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730684 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65301f64-cd42-4faf-b454-a43c7c7096a1-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730696 4740 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-run\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730708 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9aafb0ee-2681-48a9-b1e0-2442d0a16541-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730720 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/634925bb-5381-4298-a256-447ef56a2f2a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730731 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a14f3fd5-4d53-4336-85b1-7d636060bd0a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730741 4740 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/052d2ebf-cf79-4395-b125-d955d8144cef-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730754 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfcn7\" (UniqueName: \"kubernetes.io/projected/5296850e-63c0-4801-bff8-bc5213555f58-kube-api-access-dfcn7\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730768 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685d1543-1ab9-435f-b2c0-2a54c104e86f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730779 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkjs8\" (UniqueName: \"kubernetes.io/projected/a14f3fd5-4d53-4336-85b1-7d636060bd0a-kube-api-access-fkjs8\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.730790 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5296850e-63c0-4801-bff8-bc5213555f58-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.732253 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5" (OuterVolumeSpecName: "kube-api-access-zwvk5") pod "65301f64-cd42-4faf-b454-a43c7c7096a1" (UID: "65301f64-cd42-4faf-b454-a43c7c7096a1"). InnerVolumeSpecName "kube-api-access-zwvk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.732303 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c" (OuterVolumeSpecName: "kube-api-access-vxk6c") pod "052d2ebf-cf79-4395-b125-d955d8144cef" (UID: "052d2ebf-cf79-4395-b125-d955d8144cef"). InnerVolumeSpecName "kube-api-access-vxk6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.732963 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6" (OuterVolumeSpecName: "kube-api-access-pc4k6") pod "9aafb0ee-2681-48a9-b1e0-2442d0a16541" (UID: "9aafb0ee-2681-48a9-b1e0-2442d0a16541"). InnerVolumeSpecName "kube-api-access-pc4k6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.733370 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns" (OuterVolumeSpecName: "kube-api-access-bgjns") pod "634925bb-5381-4298-a256-447ef56a2f2a" (UID: "634925bb-5381-4298-a256-447ef56a2f2a"). InnerVolumeSpecName "kube-api-access-bgjns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.734717 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf" (OuterVolumeSpecName: "kube-api-access-m4wnf") pod "685d1543-1ab9-435f-b2c0-2a54c104e86f" (UID: "685d1543-1ab9-435f-b2c0-2a54c104e86f"). InnerVolumeSpecName "kube-api-access-m4wnf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834191 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwvk5\" (UniqueName: \"kubernetes.io/projected/65301f64-cd42-4faf-b454-a43c7c7096a1-kube-api-access-zwvk5\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834232 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4wnf\" (UniqueName: \"kubernetes.io/projected/685d1543-1ab9-435f-b2c0-2a54c104e86f-kube-api-access-m4wnf\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834245 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgjns\" (UniqueName: \"kubernetes.io/projected/634925bb-5381-4298-a256-447ef56a2f2a-kube-api-access-bgjns\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834257 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc4k6\" (UniqueName: \"kubernetes.io/projected/9aafb0ee-2681-48a9-b1e0-2442d0a16541-kube-api-access-pc4k6\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834268 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxk6c\" (UniqueName: \"kubernetes.io/projected/052d2ebf-cf79-4395-b125-d955d8144cef-kube-api-access-vxk6c\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:28 crc kubenswrapper[4740]: I0216 13:09:28.834284 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d2ebf-cf79-4395-b125-d955d8144cef-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.618371 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79-config-5jvnm"
Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.618393 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qvqg7" event={"ID":"3cea0875-b3a8-4a52-84ff-d9215408294b","Type":"ContainerStarted","Data":"032b7ecb51e2df34f10ba43675ea39076c3d719ca854e2ab5f2977210eadf6f6"}
Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.618460 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2e1c-account-create-update-htmg9"
Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.618373 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-j27bj"
Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.618564 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f6f4-account-create-update-l7nbq"
Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.654600 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qvqg7" podStartSLOduration=4.461117553 podStartE2EDuration="9.654579493s" podCreationTimestamp="2026-02-16 13:09:20 +0000 UTC" firstStartedPulling="2026-02-16 13:09:23.209259811 +0000 UTC m=+990.585608532" lastFinishedPulling="2026-02-16 13:09:28.402721751 +0000 UTC m=+995.779070472" observedRunningTime="2026-02-16 13:09:29.642633657 +0000 UTC m=+997.018982398" watchObservedRunningTime="2026-02-16 13:09:29.654579493 +0000 UTC m=+997.030928214"
Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.798954 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qnt79-config-5jvnm"]
Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.827770 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qnt79-config-5jvnm"]
Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.869841 4740
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qnt79-config-xjr57"] Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.870453 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aafb0ee-2681-48a9-b1e0-2442d0a16541" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.870564 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aafb0ee-2681-48a9-b1e0-2442d0a16541" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.870661 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634925bb-5381-4298-a256-447ef56a2f2a" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.870745 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="634925bb-5381-4298-a256-447ef56a2f2a" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.870910 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65301f64-cd42-4faf-b454-a43c7c7096a1" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.870995 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="65301f64-cd42-4faf-b454-a43c7c7096a1" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.871083 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d2ebf-cf79-4395-b125-d955d8144cef" containerName="ovn-config" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.871153 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d2ebf-cf79-4395-b125-d955d8144cef" containerName="ovn-config" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.871231 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14f3fd5-4d53-4336-85b1-7d636060bd0a" containerName="mariadb-database-create" Feb 16 13:09:29 crc 
kubenswrapper[4740]: I0216 13:09:29.871306 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14f3fd5-4d53-4336-85b1-7d636060bd0a" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.871366 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5296850e-63c0-4801-bff8-bc5213555f58" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.871439 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5296850e-63c0-4801-bff8-bc5213555f58" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.871526 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15147587-626f-4577-b5af-b8f574f60152" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.871593 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="15147587-626f-4577-b5af-b8f574f60152" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: E0216 13:09:29.871661 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685d1543-1ab9-435f-b2c0-2a54c104e86f" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.871723 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="685d1543-1ab9-435f-b2c0-2a54c104e86f" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.871950 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="634925bb-5381-4298-a256-447ef56a2f2a" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872037 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="685d1543-1ab9-435f-b2c0-2a54c104e86f" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872105 4740 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5296850e-63c0-4801-bff8-bc5213555f58" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872170 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="65301f64-cd42-4faf-b454-a43c7c7096a1" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872225 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a14f3fd5-4d53-4336-85b1-7d636060bd0a" containerName="mariadb-database-create" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872357 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="15147587-626f-4577-b5af-b8f574f60152" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872414 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aafb0ee-2681-48a9-b1e0-2442d0a16541" containerName="mariadb-account-create-update" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.872469 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="052d2ebf-cf79-4395-b125-d955d8144cef" containerName="ovn-config" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.877162 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.877261 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79-config-xjr57"] Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.879264 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951444 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951527 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951589 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951626 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: 
\"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951649 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwxq\" (UniqueName: \"kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:29 crc kubenswrapper[4740]: I0216 13:09:29.951666 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053649 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053722 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053756 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts\") pod 
\"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053777 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljwxq\" (UniqueName: \"kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053853 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053887 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053980 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.053991 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn\") pod 
\"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.054026 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.054728 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.056129 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.079324 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljwxq\" (UniqueName: \"kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq\") pod \"ovn-controller-qnt79-config-xjr57\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") " pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.195978 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qnt79-config-xjr57" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.663418 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qnt79-config-xjr57"] Feb 16 13:09:30 crc kubenswrapper[4740]: W0216 13:09:30.670570 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc23d24c8_fce8_4b94_8d3a_44fe83eae896.slice/crio-a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e WatchSource:0}: Error finding container a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e: Status 404 returned error can't find the container with id a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.937800 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.997599 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"] Feb 16 13:09:30 crc kubenswrapper[4740]: I0216 13:09:30.997842 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="dnsmasq-dns" containerID="cri-o://1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64" gracePeriod=10 Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.087708 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.291530 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052d2ebf-cf79-4395-b125-d955d8144cef" 
path="/var/lib/kubelet/pods/052d2ebf-cf79-4395-b125-d955d8144cef/volumes" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.513243 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.634963 4740 generic.go:334] "Generic (PLEG): container finished" podID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerID="1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64" exitCode=0 Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.635017 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" event={"ID":"b414a4c4-7799-4c49-9aa9-5718c2e5855f","Type":"ContainerDied","Data":"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64"} Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.635042 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" event={"ID":"b414a4c4-7799-4c49-9aa9-5718c2e5855f","Type":"ContainerDied","Data":"80f534c4bd81393c75dfb37a96ed92ed9eb1b28b8bfbf33de3f706f2e7523c70"} Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.635058 4740 scope.go:117] "RemoveContainer" containerID="1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.635151 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-g5hv2" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.638941 4740 generic.go:334] "Generic (PLEG): container finished" podID="c23d24c8-fce8-4b94-8d3a-44fe83eae896" containerID="4b198406a524d7dff3e729a1eee0d73938c8ae12df658ec8480ab9355f0779b0" exitCode=0 Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.638978 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-xjr57" event={"ID":"c23d24c8-fce8-4b94-8d3a-44fe83eae896","Type":"ContainerDied","Data":"4b198406a524d7dff3e729a1eee0d73938c8ae12df658ec8480ab9355f0779b0"} Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.639000 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-xjr57" event={"ID":"c23d24c8-fce8-4b94-8d3a-44fe83eae896","Type":"ContainerStarted","Data":"a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e"} Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.666948 4740 scope.go:117] "RemoveContainer" containerID="6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.677092 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc\") pod \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.677132 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config\") pod \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.677181 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb\") pod \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.677223 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsv45\" (UniqueName: \"kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45\") pod \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.677322 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb\") pod \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\" (UID: \"b414a4c4-7799-4c49-9aa9-5718c2e5855f\") " Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.683933 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45" (OuterVolumeSpecName: "kube-api-access-vsv45") pod "b414a4c4-7799-4c49-9aa9-5718c2e5855f" (UID: "b414a4c4-7799-4c49-9aa9-5718c2e5855f"). InnerVolumeSpecName "kube-api-access-vsv45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.690186 4740 scope.go:117] "RemoveContainer" containerID="1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64" Feb 16 13:09:31 crc kubenswrapper[4740]: E0216 13:09:31.690716 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64\": container with ID starting with 1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64 not found: ID does not exist" containerID="1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.690759 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64"} err="failed to get container status \"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64\": rpc error: code = NotFound desc = could not find container \"1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64\": container with ID starting with 1fad5e228bcb307a3a10611e1960e22329b2bd516744cdfe0c65846ea5621f64 not found: ID does not exist" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.690783 4740 scope.go:117] "RemoveContainer" containerID="6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692" Feb 16 13:09:31 crc kubenswrapper[4740]: E0216 13:09:31.691489 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692\": container with ID starting with 6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692 not found: ID does not exist" containerID="6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.691517 
4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692"} err="failed to get container status \"6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692\": rpc error: code = NotFound desc = could not find container \"6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692\": container with ID starting with 6656c7a7c14009119c325211b1f2b2d23b0d1a6c0d6d6895f1e3c9f241aa8692 not found: ID does not exist" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.719861 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b414a4c4-7799-4c49-9aa9-5718c2e5855f" (UID: "b414a4c4-7799-4c49-9aa9-5718c2e5855f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.721614 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b414a4c4-7799-4c49-9aa9-5718c2e5855f" (UID: "b414a4c4-7799-4c49-9aa9-5718c2e5855f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.725187 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config" (OuterVolumeSpecName: "config") pod "b414a4c4-7799-4c49-9aa9-5718c2e5855f" (UID: "b414a4c4-7799-4c49-9aa9-5718c2e5855f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.730631 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b414a4c4-7799-4c49-9aa9-5718c2e5855f" (UID: "b414a4c4-7799-4c49-9aa9-5718c2e5855f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.779326 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.779369 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.779381 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.779395 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsv45\" (UniqueName: \"kubernetes.io/projected/b414a4c4-7799-4c49-9aa9-5718c2e5855f-kube-api-access-vsv45\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.779407 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414a4c4-7799-4c49-9aa9-5718c2e5855f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.982340 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"] Feb 16 
13:09:31 crc kubenswrapper[4740]: I0216 13:09:31.987517 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-g5hv2"]
Feb 16 13:09:32 crc kubenswrapper[4740]: I0216 13:09:32.647878 4740 generic.go:334] "Generic (PLEG): container finished" podID="3cea0875-b3a8-4a52-84ff-d9215408294b" containerID="032b7ecb51e2df34f10ba43675ea39076c3d719ca854e2ab5f2977210eadf6f6" exitCode=0
Feb 16 13:09:32 crc kubenswrapper[4740]: I0216 13:09:32.647950 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qvqg7" event={"ID":"3cea0875-b3a8-4a52-84ff-d9215408294b","Type":"ContainerDied","Data":"032b7ecb51e2df34f10ba43675ea39076c3d719ca854e2ab5f2977210eadf6f6"}
Feb 16 13:09:32 crc kubenswrapper[4740]: I0216 13:09:32.976763 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79-config-xjr57"
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.101955 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102012 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102039 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102096 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run" (OuterVolumeSpecName: "var-run") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102135 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102201 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102211 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102219 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102315 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljwxq\" (UniqueName: \"kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq\") pod \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\" (UID: \"c23d24c8-fce8-4b94-8d3a-44fe83eae896\") "
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102733 4740 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102748 4740 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.102758 4740 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c23d24c8-fce8-4b94-8d3a-44fe83eae896-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.103071 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.103596 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts" (OuterVolumeSpecName: "scripts") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.107853 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq" (OuterVolumeSpecName: "kube-api-access-ljwxq") pod "c23d24c8-fce8-4b94-8d3a-44fe83eae896" (UID: "c23d24c8-fce8-4b94-8d3a-44fe83eae896"). InnerVolumeSpecName "kube-api-access-ljwxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.205222 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.205276 4740 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d24c8-fce8-4b94-8d3a-44fe83eae896-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.205300 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljwxq\" (UniqueName: \"kubernetes.io/projected/c23d24c8-fce8-4b94-8d3a-44fe83eae896-kube-api-access-ljwxq\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.310103 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" path="/var/lib/kubelet/pods/b414a4c4-7799-4c49-9aa9-5718c2e5855f/volumes"
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.659824 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qnt79-config-xjr57" event={"ID":"c23d24c8-fce8-4b94-8d3a-44fe83eae896","Type":"ContainerDied","Data":"a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e"}
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.659860 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qnt79-config-xjr57"
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.659876 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2da5054d9bb13bf2226a5bae8103b52678d3aae57acb04120341957bcca479e"
Feb 16 13:09:33 crc kubenswrapper[4740]: I0216 13:09:33.974253 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.068101 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qnt79-config-xjr57"]
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.075428 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qnt79-config-xjr57"]
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.117108 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data\") pod \"3cea0875-b3a8-4a52-84ff-d9215408294b\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") "
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.117388 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gstg\" (UniqueName: \"kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg\") pod \"3cea0875-b3a8-4a52-84ff-d9215408294b\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") "
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.117491 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle\") pod \"3cea0875-b3a8-4a52-84ff-d9215408294b\" (UID: \"3cea0875-b3a8-4a52-84ff-d9215408294b\") "
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.127000 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg" (OuterVolumeSpecName: "kube-api-access-4gstg") pod "3cea0875-b3a8-4a52-84ff-d9215408294b" (UID: "3cea0875-b3a8-4a52-84ff-d9215408294b"). InnerVolumeSpecName "kube-api-access-4gstg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.137970 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cea0875-b3a8-4a52-84ff-d9215408294b" (UID: "3cea0875-b3a8-4a52-84ff-d9215408294b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.167137 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data" (OuterVolumeSpecName: "config-data") pod "3cea0875-b3a8-4a52-84ff-d9215408294b" (UID: "3cea0875-b3a8-4a52-84ff-d9215408294b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.219234 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.219461 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cea0875-b3a8-4a52-84ff-d9215408294b-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.219573 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gstg\" (UniqueName: \"kubernetes.io/projected/3cea0875-b3a8-4a52-84ff-d9215408294b-kube-api-access-4gstg\") on node \"crc\" DevicePath \"\""
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.670269 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qvqg7" event={"ID":"3cea0875-b3a8-4a52-84ff-d9215408294b","Type":"ContainerDied","Data":"5a0b7120fe35634905135cb504138e76441745ac6645a1eac08b8b566d8c1013"}
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.670307 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a0b7120fe35634905135cb504138e76441745ac6645a1eac08b8b566d8c1013"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.670356 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qvqg7"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.919005 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"]
Feb 16 13:09:34 crc kubenswrapper[4740]: E0216 13:09:34.919702 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="init"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.919726 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="init"
Feb 16 13:09:34 crc kubenswrapper[4740]: E0216 13:09:34.919758 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23d24c8-fce8-4b94-8d3a-44fe83eae896" containerName="ovn-config"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.919766 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23d24c8-fce8-4b94-8d3a-44fe83eae896" containerName="ovn-config"
Feb 16 13:09:34 crc kubenswrapper[4740]: E0216 13:09:34.919779 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="dnsmasq-dns"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.919789 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="dnsmasq-dns"
Feb 16 13:09:34 crc kubenswrapper[4740]: E0216 13:09:34.919805 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cea0875-b3a8-4a52-84ff-d9215408294b" containerName="keystone-db-sync"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.919834 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cea0875-b3a8-4a52-84ff-d9215408294b" containerName="keystone-db-sync"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.922749 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b414a4c4-7799-4c49-9aa9-5718c2e5855f" containerName="dnsmasq-dns"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.922781 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cea0875-b3a8-4a52-84ff-d9215408294b" containerName="keystone-db-sync"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.922793 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c23d24c8-fce8-4b94-8d3a-44fe83eae896" containerName="ovn-config"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.923597 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.936847 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"]
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.981057 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l5jrj"]
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.983256 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.987178 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.987363 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nljh"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.987529 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.989556 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.989688 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 13:09:34 crc kubenswrapper[4740]: I0216 13:09:34.996944 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l5jrj"]
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.029927 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.029993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.030042 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.030084 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvrs2\" (UniqueName: \"kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.030117 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.030149 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.082723 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-ffb796745-6csq7"]
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.086297 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffb796745-6csq7"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.090744 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.090951 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.091129 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.091307 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-n5hsv"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.102160 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ffb796745-6csq7"]
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131630 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131715 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvrs2\" (UniqueName: \"kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131745 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7xlf\" (UniqueName: \"kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131777 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131802 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131855 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131881 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131912 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.131938 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.132007 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.132031 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.132071 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.133209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.133912 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.134585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.144144 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.145563 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.162882 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvrs2\" (UniqueName: \"kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2\") pod \"dnsmasq-dns-6f8c45789f-8pfh4\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.174241 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.176438 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.180804 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.182354 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.203988 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233162 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233209 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233258 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ks2m\" (UniqueName: \"kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233298 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7xlf\" (UniqueName: \"kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233326 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233360 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233513 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233566 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233613 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96l6h\" (UniqueName: \"kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233656 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233712 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233870 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233957 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.233993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.234049 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.234087 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.234143 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.234172 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.239217 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.244571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.245433 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.248363 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.250295 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.257530 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.264354 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7xlf\" (UniqueName: \"kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf\") pod \"keystone-bootstrap-l5jrj\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " pod="openstack/keystone-bootstrap-l5jrj"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.276001 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hclws"]
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.277758 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hclws"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.284265 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hg2gh"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.284448 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.284561 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.344880 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345163 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7"
Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345207 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7"
Feb 16
13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345327 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345431 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345491 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345613 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345715 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345779 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.345900 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ks2m\" (UniqueName: \"kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.346046 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.346119 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.346227 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96l6h\" (UniqueName: \"kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.346318 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dr7b\" (UniqueName: \"kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b\") pod 
\"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.353684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.354139 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.355995 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.370358 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.373352 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.373618 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd\") pod \"ceilometer-0\" (UID: 
\"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.376440 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.380154 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.398259 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.422023 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96l6h\" (UniqueName: \"kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.422851 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c23d24c8-fce8-4b94-8d3a-44fe83eae896" path="/var/lib/kubelet/pods/c23d24c8-fce8-4b94-8d3a-44fe83eae896/volumes" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.423228 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data\") 
pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.425766 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key\") pod \"horizon-ffb796745-6csq7\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.426989 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ks2m\" (UniqueName: \"kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m\") pod \"ceilometer-0\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") " pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.427672 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-dlcqm"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.432405 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.435093 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.436590 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.442607 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.443216 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-n77m5" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.452789 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.454710 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.454876 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dr7b\" (UniqueName: \"kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.456884 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hclws"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.469516 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle\") pod \"neutron-db-sync-hclws\" (UID: 
\"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.470127 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.489340 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.490496 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dr7b\" (UniqueName: \"kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b\") pod \"neutron-db-sync-hclws\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.496738 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dlcqm"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.524938 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.539233 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2hxgr"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.540289 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.544715 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mtx8t" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.544897 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.544998 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556736 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcwhc\" (UniqueName: \"kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556792 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd2x2\" (UniqueName: \"kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556828 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556900 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556919 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556941 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.556976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.578832 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2hxgr"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.599978 4740 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-d9rnm"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.601379 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.610062 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.610229 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.610376 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bjvkq" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.621987 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d9rnm"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.630667 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.654360 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.656517 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.658933 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.658984 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659006 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659030 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4fvz\" (UniqueName: \"kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659316 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " 
pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659344 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659391 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659413 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659431 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc 
kubenswrapper[4740]: I0216 13:09:35.659508 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659553 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659650 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659731 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659791 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659845 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcwhc\" (UniqueName: \"kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659930 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd2x2\" (UniqueName: \"kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.659952 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv6qp\" (UniqueName: \"kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.662562 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.663239 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.665348 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.681241 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.683657 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.684388 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.688363 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.706471 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jd2x2\" (UniqueName: \"kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2\") pod \"barbican-db-sync-dlcqm\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.709335 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.710071 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcwhc\" (UniqueName: \"kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc\") pod \"horizon-dfc4b7997-nx6ww\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") " pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.765056 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hclws" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766731 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766804 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv6qp\" (UniqueName: \"kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766887 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766917 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4fvz\" (UniqueName: \"kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766939 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.766990 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767022 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts\") pod \"cinder-db-sync-2hxgr\" (UID: 
\"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767050 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767075 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767113 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767173 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767209 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " 
pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767265 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767287 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767317 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k97f\" (UniqueName: \"kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.767348 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.776950 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: 
I0216 13:09:35.778636 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.778692 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.781107 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.782691 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.789499 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.790188 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.791648 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.796769 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4fvz\" (UniqueName: \"kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.802195 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle\") pod \"cinder-db-sync-2hxgr\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.802768 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv6qp\" (UniqueName: \"kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp\") pod \"placement-db-sync-d9rnm\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.824024 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.841249 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.871975 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.872033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k97f\" (UniqueName: \"kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.872076 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.872113 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.872148 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 
13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.872224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.873919 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.874308 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.874463 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.874464 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.874983 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.880757 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.899489 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k97f\" (UniqueName: \"kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f\") pod \"dnsmasq-dns-fcfdd6f9f-8v87v\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.966290 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d9rnm" Feb 16 13:09:35 crc kubenswrapper[4740]: I0216 13:09:35.984615 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.054268 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"] Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.190088 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l5jrj"] Feb 16 13:09:36 crc kubenswrapper[4740]: W0216 13:09:36.205162 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93a2e5d2_cd7f_48d9_87ff_89475b5eeee0.slice/crio-c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259 WatchSource:0}: Error finding container c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259: Status 404 returned error can't find the container with id c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259 Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.352454 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ffb796745-6csq7"] Feb 16 13:09:36 crc kubenswrapper[4740]: W0216 13:09:36.356392 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode23974e9_800c_4295_8f84_89b4052280cd.slice/crio-b5b6803f18aed28cfffe176c666815d2af3e0f9085dbb6960a1b2061739c2efb WatchSource:0}: Error finding container b5b6803f18aed28cfffe176c666815d2af3e0f9085dbb6960a1b2061739c2efb: Status 404 returned error can't find the container with id b5b6803f18aed28cfffe176c666815d2af3e0f9085dbb6960a1b2061739c2efb Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.362848 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:09:36 crc kubenswrapper[4740]: W0216 13:09:36.364018 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d3f831f_b6c1_4f65_85cb_e5ce8ffc3f93.slice/crio-4de65a95aeb197276b59d3580d2d47362fd491a1ba955970f286211155c099b7 WatchSource:0}: Error finding container 4de65a95aeb197276b59d3580d2d47362fd491a1ba955970f286211155c099b7: Status 404 returned error can't find the container with id 4de65a95aeb197276b59d3580d2d47362fd491a1ba955970f286211155c099b7 Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.548363 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hclws"] Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.555518 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:09:36 crc kubenswrapper[4740]: W0216 13:09:36.582526 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c41d146_de9f_4d90_bb9e_6c12fc832650.slice/crio-1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d WatchSource:0}: Error finding container 1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d: Status 404 returned error can't find the container with id 1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.584681 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dlcqm"] Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.708580 4740 generic.go:334] "Generic (PLEG): container finished" podID="a4ce9a30-45a5-40c6-a259-00a790928e07" containerID="872e1e092f6c8012cfe8884aa92b7bfd5c3ecdc7b66acc3a62c144ec783f7325" exitCode=0 Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.708688 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" event={"ID":"a4ce9a30-45a5-40c6-a259-00a790928e07","Type":"ContainerDied","Data":"872e1e092f6c8012cfe8884aa92b7bfd5c3ecdc7b66acc3a62c144ec783f7325"} Feb 
16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.708721 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" event={"ID":"a4ce9a30-45a5-40c6-a259-00a790928e07","Type":"ContainerStarted","Data":"01c87342c96bf6d54c3f1ad0c1b57760034f7e0851745284f9ad2ed4dcf6b3ad"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.711743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerStarted","Data":"ebb9cfe67450ca7ef04f47283c5bb6f822165573117c5716dd75a543c9096769"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.713451 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dlcqm" event={"ID":"b63f4468-5c78-4dfd-a40a-302877eba3dc","Type":"ContainerStarted","Data":"b7811f49189bed88b6538a8117cf752349ac762f9fdcabeb27482bf9fb8a222e"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.714646 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerStarted","Data":"b5b6803f18aed28cfffe176c666815d2af3e0f9085dbb6960a1b2061739c2efb"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.715734 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ffb796745-6csq7" event={"ID":"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93","Type":"ContainerStarted","Data":"4de65a95aeb197276b59d3580d2d47362fd491a1ba955970f286211155c099b7"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.716702 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hclws" event={"ID":"2c41d146-de9f-4d90-bb9e-6c12fc832650","Type":"ContainerStarted","Data":"1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.735754 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-l5jrj" event={"ID":"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0","Type":"ContainerStarted","Data":"b6158fa1fc9cea7906b55a39bb9178812e4aaa28f5f59307e4b0bc0142b18d51"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.735792 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l5jrj" event={"ID":"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0","Type":"ContainerStarted","Data":"c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259"} Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.758970 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l5jrj" podStartSLOduration=2.758947352 podStartE2EDuration="2.758947352s" podCreationTimestamp="2026-02-16 13:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:36.752513189 +0000 UTC m=+1004.128861910" watchObservedRunningTime="2026-02-16 13:09:36.758947352 +0000 UTC m=+1004.135296073" Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.844275 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.863880 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d9rnm"] Feb 16 13:09:36 crc kubenswrapper[4740]: I0216 13:09:36.890345 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2hxgr"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.106712 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196027 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196387 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196458 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvrs2\" (UniqueName: \"kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196516 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196610 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.196638 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0\") pod \"a4ce9a30-45a5-40c6-a259-00a790928e07\" (UID: \"a4ce9a30-45a5-40c6-a259-00a790928e07\") " Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.209098 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2" (OuterVolumeSpecName: "kube-api-access-xvrs2") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "kube-api-access-xvrs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.227605 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.247529 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config" (OuterVolumeSpecName: "config") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.252850 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.269433 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.284841 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4ce9a30-45a5-40c6-a259-00a790928e07" (UID: "a4ce9a30-45a5-40c6-a259-00a790928e07"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.302006 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.302035 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.302044 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.302052 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc 
kubenswrapper[4740]: I0216 13:09:37.302061 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvrs2\" (UniqueName: \"kubernetes.io/projected/a4ce9a30-45a5-40c6-a259-00a790928e07-kube-api-access-xvrs2\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.302070 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4ce9a30-45a5-40c6-a259-00a790928e07-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.397339 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.471943 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.485313 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:09:37 crc kubenswrapper[4740]: E0216 13:09:37.485767 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ce9a30-45a5-40c6-a259-00a790928e07" containerName="init" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.485788 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ce9a30-45a5-40c6-a259-00a790928e07" containerName="init" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.486055 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ce9a30-45a5-40c6-a259-00a790928e07" containerName="init" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.487106 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511024 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511076 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511121 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511203 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cccxj\" (UniqueName: \"kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511288 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " 
pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.511834 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613052 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613121 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613140 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613171 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613240 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cccxj\" (UniqueName: \"kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj\") pod \"horizon-58778bbcc-2dwkc\" (UID: 
\"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.613727 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.614079 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.614938 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.618275 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.629072 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cccxj\" (UniqueName: \"kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj\") pod \"horizon-58778bbcc-2dwkc\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") " pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.762700 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hxgr" event={"ID":"6e6806e6-e7ab-40bb-a703-0f4bfe131539","Type":"ContainerStarted","Data":"09ac2a81e51e0f54158edbd6cff4ecaee99212883c7807008289667a194afba6"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.767791 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d9rnm" event={"ID":"2fa1d954-018c-45a1-93e6-149318cdda8c","Type":"ContainerStarted","Data":"9b1b4bed7708a93e0a35b8ec7450ea513f432b2711fa0f1f62f6fa444b6ade25"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.772594 4740 generic.go:334] "Generic (PLEG): container finished" podID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerID="b1122b7363efb2c3f51a15a5779c7778b950ebde41c8ea4640584635fdf06e8f" exitCode=0 Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.772667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" event={"ID":"3cd0546c-4e67-40e3-93c1-1aee20e6df48","Type":"ContainerDied","Data":"b1122b7363efb2c3f51a15a5779c7778b950ebde41c8ea4640584635fdf06e8f"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.772693 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" event={"ID":"3cd0546c-4e67-40e3-93c1-1aee20e6df48","Type":"ContainerStarted","Data":"c97c29b3334866f32db650fc18c5be9637e2474fd2a6df30fb7505a093dc5ffb"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.783762 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" event={"ID":"a4ce9a30-45a5-40c6-a259-00a790928e07","Type":"ContainerDied","Data":"01c87342c96bf6d54c3f1ad0c1b57760034f7e0851745284f9ad2ed4dcf6b3ad"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.783774 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8pfh4" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.784113 4740 scope.go:117] "RemoveContainer" containerID="872e1e092f6c8012cfe8884aa92b7bfd5c3ecdc7b66acc3a62c144ec783f7325" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.817448 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hclws" event={"ID":"2c41d146-de9f-4d90-bb9e-6c12fc832650","Type":"ContainerStarted","Data":"afb5050141aa0fbb8480e6bcc95d53720db77edafe16a89f557711467d506eaf"} Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.824821 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.926917 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.950034 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8pfh4"] Feb 16 13:09:37 crc kubenswrapper[4740]: I0216 13:09:37.951071 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hclws" podStartSLOduration=2.951049703 podStartE2EDuration="2.951049703s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:37.864706946 +0000 UTC m=+1005.241055677" watchObservedRunningTime="2026-02-16 13:09:37.951049703 +0000 UTC m=+1005.327398434" Feb 16 13:09:38 crc kubenswrapper[4740]: I0216 13:09:38.577271 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:09:38 crc kubenswrapper[4740]: I0216 13:09:38.830154 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" 
event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerStarted","Data":"ef863733ff531229caffac8488a7e74d8977b0c756b3b6496a0497807187e742"} Feb 16 13:09:38 crc kubenswrapper[4740]: I0216 13:09:38.839725 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg27" event={"ID":"f092c8c4-9a32-4093-9a5c-bc5fd05d600e","Type":"ContainerStarted","Data":"f1ee4fdc8a66d1ba0f722901509cbb10e2513b2d2385aba5e0bb1ae87766bf21"} Feb 16 13:09:38 crc kubenswrapper[4740]: I0216 13:09:38.865642 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7lg27" podStartSLOduration=4.142222816 podStartE2EDuration="35.865622421s" podCreationTimestamp="2026-02-16 13:09:03 +0000 UTC" firstStartedPulling="2026-02-16 13:09:05.112910171 +0000 UTC m=+972.489258892" lastFinishedPulling="2026-02-16 13:09:36.836309756 +0000 UTC m=+1004.212658497" observedRunningTime="2026-02-16 13:09:38.857682382 +0000 UTC m=+1006.234031113" watchObservedRunningTime="2026-02-16 13:09:38.865622421 +0000 UTC m=+1006.241971142" Feb 16 13:09:39 crc kubenswrapper[4740]: I0216 13:09:39.297502 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ce9a30-45a5-40c6-a259-00a790928e07" path="/var/lib/kubelet/pods/a4ce9a30-45a5-40c6-a259-00a790928e07/volumes" Feb 16 13:09:39 crc kubenswrapper[4740]: I0216 13:09:39.851219 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" event={"ID":"3cd0546c-4e67-40e3-93c1-1aee20e6df48","Type":"ContainerStarted","Data":"0bd5e46d963bd8f9a4c9b7c3012c62f1a37d6c130636480a19dc95c0f26d9e15"} Feb 16 13:09:40 crc kubenswrapper[4740]: I0216 13:09:40.861262 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:40 crc kubenswrapper[4740]: I0216 13:09:40.890487 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" 
podStartSLOduration=5.890467207 podStartE2EDuration="5.890467207s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:09:40.882168315 +0000 UTC m=+1008.258517036" watchObservedRunningTime="2026-02-16 13:09:40.890467207 +0000 UTC m=+1008.266815918" Feb 16 13:09:41 crc kubenswrapper[4740]: I0216 13:09:41.883387 4740 generic.go:334] "Generic (PLEG): container finished" podID="93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" containerID="b6158fa1fc9cea7906b55a39bb9178812e4aaa28f5f59307e4b0bc0142b18d51" exitCode=0 Feb 16 13:09:41 crc kubenswrapper[4740]: I0216 13:09:41.883462 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l5jrj" event={"ID":"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0","Type":"ContainerDied","Data":"b6158fa1fc9cea7906b55a39bb9178812e4aaa28f5f59307e4b0bc0142b18d51"} Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.145827 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ffb796745-6csq7"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.184848 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.187068 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.193397 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.204446 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.252336 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.268554 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.268606 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.268634 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.268962 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key\") 
pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.269122 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsswg\" (UniqueName: \"kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.269405 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.269436 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.278586 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56b9fd8c4d-crftf"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.286449 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.292501 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56b9fd8c4d-crftf"] Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.389109 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.389488 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-config-data\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.389848 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxbld\" (UniqueName: \"kubernetes.io/projected/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-kube-api-access-cxbld\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.389948 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.390085 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.390188 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.390416 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-combined-ca-bundle\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.390508 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.390887 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-secret-key\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.391055 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-tls-certs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.391163 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.391291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-scripts\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.391466 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsswg\" (UniqueName: \"kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.391578 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-logs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.392448 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data\") pod 
\"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.392956 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.393325 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.401053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.403895 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.406672 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc 
kubenswrapper[4740]: I0216 13:09:44.409092 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsswg\" (UniqueName: \"kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg\") pod \"horizon-5476559f6b-jvkbv\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493691 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-secret-key\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493751 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-tls-certs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493786 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-scripts\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493830 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-logs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493887 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-config-data\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxbld\" (UniqueName: \"kubernetes.io/projected/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-kube-api-access-cxbld\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.493946 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-combined-ca-bundle\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.494892 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-scripts\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.495245 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-logs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.496074 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-config-data\") pod 
\"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.497265 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-combined-ca-bundle\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.498012 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-secret-key\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.513063 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-horizon-tls-certs\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.516263 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxbld\" (UniqueName: \"kubernetes.io/projected/add1eb0e-dbfc-463a-b676-3e2e2b1f478d-kube-api-access-cxbld\") pod \"horizon-56b9fd8c4d-crftf\" (UID: \"add1eb0e-dbfc-463a-b676-3e2e2b1f478d\") " pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.522331 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:09:44 crc kubenswrapper[4740]: I0216 13:09:44.612310 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:09:45 crc kubenswrapper[4740]: I0216 13:09:45.986856 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:09:46 crc kubenswrapper[4740]: I0216 13:09:46.056035 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:09:46 crc kubenswrapper[4740]: I0216 13:09:46.056318 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" containerID="cri-o://9b0f87ae4e501b819b69299fe6b0bca555aa557876b95fd376763eb368141597" gracePeriod=10 Feb 16 13:09:46 crc kubenswrapper[4740]: I0216 13:09:46.922475 4740 generic.go:334] "Generic (PLEG): container finished" podID="56781f2b-b49d-4234-981b-a01a10dfab05" containerID="9b0f87ae4e501b819b69299fe6b0bca555aa557876b95fd376763eb368141597" exitCode=0 Feb 16 13:09:46 crc kubenswrapper[4740]: I0216 13:09:46.922564 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" event={"ID":"56781f2b-b49d-4234-981b-a01a10dfab05","Type":"ContainerDied","Data":"9b0f87ae4e501b819b69299fe6b0bca555aa557876b95fd376763eb368141597"} Feb 16 13:09:50 crc kubenswrapper[4740]: I0216 13:09:50.937083 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 16 13:09:51 crc kubenswrapper[4740]: I0216 13:09:51.982878 4740 generic.go:334] "Generic (PLEG): container finished" podID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" containerID="f1ee4fdc8a66d1ba0f722901509cbb10e2513b2d2385aba5e0bb1ae87766bf21" exitCode=0 Feb 16 13:09:51 crc kubenswrapper[4740]: I0216 13:09:51.982945 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg27" event={"ID":"f092c8c4-9a32-4093-9a5c-bc5fd05d600e","Type":"ContainerDied","Data":"f1ee4fdc8a66d1ba0f722901509cbb10e2513b2d2385aba5e0bb1ae87766bf21"} Feb 16 13:09:52 crc kubenswrapper[4740]: E0216 13:09:52.294641 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 16 13:09:52 crc kubenswrapper[4740]: E0216 13:09:52.294919 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbfh699h67dh578h87h7h565hcfhd9h577hchbfh55ch8dh5cfh686h567h7fh698h564h58fh544h576hc5hd7h688h5f6h594h5d5h57bh589h5cdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96l6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePull
Policy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-ffb796745-6csq7_openstack(1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:09:52 crc kubenswrapper[4740]: E0216 13:09:52.297299 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-ffb796745-6csq7" podUID="1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.392826 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535377 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535459 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7xlf\" (UniqueName: \"kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535597 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535649 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535682 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.535722 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys\") pod \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\" (UID: \"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0\") " Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.541302 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf" (OuterVolumeSpecName: "kube-api-access-z7xlf") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "kube-api-access-z7xlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.542044 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts" (OuterVolumeSpecName: "scripts") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.542462 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.544038 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.564795 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data" (OuterVolumeSpecName: "config-data") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.568168 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" (UID: "93a2e5d2-cd7f-48d9-87ff-89475b5eeee0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638692 4740 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638730 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7xlf\" (UniqueName: \"kubernetes.io/projected/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-kube-api-access-z7xlf\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638746 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638759 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 
13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638770 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.638781 4740 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.992939 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l5jrj" event={"ID":"93a2e5d2-cd7f-48d9-87ff-89475b5eeee0","Type":"ContainerDied","Data":"c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259"} Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.992976 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l5jrj" Feb 16 13:09:52 crc kubenswrapper[4740]: I0216 13:09:52.992995 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c657a21dae771240bee3b0cd9defff5af7e3b2b2630e7da79f1d768c900bf259" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.466989 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l5jrj"] Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.474196 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l5jrj"] Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.583671 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lxgpl"] Feb 16 13:09:53 crc kubenswrapper[4740]: E0216 13:09:53.584408 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" containerName="keystone-bootstrap" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.584439 4740 
state_mem.go:107] "Deleted CPUSet assignment" podUID="93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" containerName="keystone-bootstrap" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.584734 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" containerName="keystone-bootstrap" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.585641 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.589899 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.590156 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.590426 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.590614 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.590785 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nljh" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.610014 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lxgpl"] Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.657336 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.658079 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.658481 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.658664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.659262 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq56k\" (UniqueName: \"kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.659566 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761581 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761684 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761703 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761742 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.761780 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq56k\" (UniqueName: 
\"kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.768677 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.769684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.771261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.773831 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.778163 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys\") pod \"keystone-bootstrap-lxgpl\" (UID: 
\"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.781427 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq56k\" (UniqueName: \"kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k\") pod \"keystone-bootstrap-lxgpl\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:53 crc kubenswrapper[4740]: I0216 13:09:53.910703 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:09:55 crc kubenswrapper[4740]: I0216 13:09:55.293711 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93a2e5d2-cd7f-48d9-87ff-89475b5eeee0" path="/var/lib/kubelet/pods/93a2e5d2-cd7f-48d9-87ff-89475b5eeee0/volumes" Feb 16 13:09:55 crc kubenswrapper[4740]: I0216 13:09:55.937638 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 16 13:09:57 crc kubenswrapper[4740]: I0216 13:09:57.048616 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c41d146-de9f-4d90-bb9e-6c12fc832650" containerID="afb5050141aa0fbb8480e6bcc95d53720db77edafe16a89f557711467d506eaf" exitCode=0 Feb 16 13:09:57 crc kubenswrapper[4740]: I0216 13:09:57.048713 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hclws" event={"ID":"2c41d146-de9f-4d90-bb9e-6c12fc832650","Type":"ContainerDied","Data":"afb5050141aa0fbb8480e6bcc95d53720db77edafe16a89f557711467d506eaf"} Feb 16 13:10:00 crc kubenswrapper[4740]: E0216 13:10:00.128269 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 16 13:10:00 crc kubenswrapper[4740]: E0216 13:10:00.128683 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n557h55fh97h596h5b4hfh6dhb6h57dh555h9ch598h568h555h5d9h5f9h8dh59h5c9h566h5c4h5dfh5dbh66fh96h668h67ch9bh57ch5c6h68fh546q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ks2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e23974e9-800c-4295-8f84-89b4052280cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.217414 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hclws" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.218958 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.227155 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7lg27" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292246 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dr7b\" (UniqueName: \"kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b\") pod \"2c41d146-de9f-4d90-bb9e-6c12fc832650\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292310 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle\") pod \"2c41d146-de9f-4d90-bb9e-6c12fc832650\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292410 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkb5m\" (UniqueName: \"kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m\") pod \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292442 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle\") pod \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292534 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data\") pod \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292580 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts\") pod \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292612 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config\") pod \"2c41d146-de9f-4d90-bb9e-6c12fc832650\" (UID: \"2c41d146-de9f-4d90-bb9e-6c12fc832650\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292641 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key\") pod \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292686 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data\") pod \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292728 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96l6h\" (UniqueName: \"kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h\") pod \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.292752 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs\") pod \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\" (UID: \"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 
13:10:00.292793 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data\") pod \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\" (UID: \"f092c8c4-9a32-4093-9a5c-bc5fd05d600e\") " Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.293361 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts" (OuterVolumeSpecName: "scripts") pod "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" (UID: "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.294447 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data" (OuterVolumeSpecName: "config-data") pod "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" (UID: "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.297220 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs" (OuterVolumeSpecName: "logs") pod "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" (UID: "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.298126 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m" (OuterVolumeSpecName: "kube-api-access-gkb5m") pod "f092c8c4-9a32-4093-9a5c-bc5fd05d600e" (UID: "f092c8c4-9a32-4093-9a5c-bc5fd05d600e"). InnerVolumeSpecName "kube-api-access-gkb5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.298240 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f092c8c4-9a32-4093-9a5c-bc5fd05d600e" (UID: "f092c8c4-9a32-4093-9a5c-bc5fd05d600e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.298364 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b" (OuterVolumeSpecName: "kube-api-access-9dr7b") pod "2c41d146-de9f-4d90-bb9e-6c12fc832650" (UID: "2c41d146-de9f-4d90-bb9e-6c12fc832650"). InnerVolumeSpecName "kube-api-access-9dr7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.299782 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h" (OuterVolumeSpecName: "kube-api-access-96l6h") pod "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" (UID: "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93"). InnerVolumeSpecName "kube-api-access-96l6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.301658 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" (UID: "1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.318471 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c41d146-de9f-4d90-bb9e-6c12fc832650" (UID: "2c41d146-de9f-4d90-bb9e-6c12fc832650"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.323461 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config" (OuterVolumeSpecName: "config") pod "2c41d146-de9f-4d90-bb9e-6c12fc832650" (UID: "2c41d146-de9f-4d90-bb9e-6c12fc832650"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.335698 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f092c8c4-9a32-4093-9a5c-bc5fd05d600e" (UID: "f092c8c4-9a32-4093-9a5c-bc5fd05d600e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.355966 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data" (OuterVolumeSpecName: "config-data") pod "f092c8c4-9a32-4093-9a5c-bc5fd05d600e" (UID: "f092c8c4-9a32-4093-9a5c-bc5fd05d600e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394685 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394725 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dr7b\" (UniqueName: \"kubernetes.io/projected/2c41d146-de9f-4d90-bb9e-6c12fc832650-kube-api-access-9dr7b\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394737 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394747 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkb5m\" (UniqueName: \"kubernetes.io/projected/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-kube-api-access-gkb5m\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394755 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394764 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f092c8c4-9a32-4093-9a5c-bc5fd05d600e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394775 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 
13:10:00.394785 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c41d146-de9f-4d90-bb9e-6c12fc832650-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394794 4740 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394836 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394847 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96l6h\" (UniqueName: \"kubernetes.io/projected/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-kube-api-access-96l6h\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:00 crc kubenswrapper[4740]: I0216 13:10:00.394854 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.083025 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hclws" event={"ID":"2c41d146-de9f-4d90-bb9e-6c12fc832650","Type":"ContainerDied","Data":"1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d"} Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.083288 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1239b64d9a447a462a223a1b7c32aba3f38a7dc5920e0fbd6eb28d0ea7f33a2d" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.083047 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hclws" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.084078 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ffb796745-6csq7" event={"ID":"1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93","Type":"ContainerDied","Data":"4de65a95aeb197276b59d3580d2d47362fd491a1ba955970f286211155c099b7"} Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.084199 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffb796745-6csq7" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.089048 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7lg27" event={"ID":"f092c8c4-9a32-4093-9a5c-bc5fd05d600e","Type":"ContainerDied","Data":"b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199"} Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.089085 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b442dc0fb308c9b04f285691fbee4ec7e1095364eb70b6a8b176006c1f1a9199" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.089097 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7lg27" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.151042 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ffb796745-6csq7"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.160597 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-ffb796745-6csq7"] Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.249411 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.249605 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,Recursive
ReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4fvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-2hxgr_openstack(6e6806e6-e7ab-40bb-a703-0f4bfe131539): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.251570 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2hxgr" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.299052 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93" path="/var/lib/kubelet/pods/1d3f831f-b6c1-4f65-85cb-e5ce8ffc3f93/volumes" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 
13:10:01.352040 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.518724 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-724rn\" (UniqueName: \"kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.518887 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.518929 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.518961 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.519009 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.519027 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config\") pod \"56781f2b-b49d-4234-981b-a01a10dfab05\" (UID: \"56781f2b-b49d-4234-981b-a01a10dfab05\") " Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.520933 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.521359 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" containerName="glance-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521372 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" containerName="glance-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.521385 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="init" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521391 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="init" Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.521404 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c41d146-de9f-4d90-bb9e-6c12fc832650" containerName="neutron-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521418 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c41d146-de9f-4d90-bb9e-6c12fc832650" containerName="neutron-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: E0216 13:10:01.521432 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521438 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" Feb 16 
13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521603 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521625 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c41d146-de9f-4d90-bb9e-6c12fc832650" containerName="neutron-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.521634 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" containerName="glance-db-sync" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.522564 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.566960 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn" (OuterVolumeSpecName: "kube-api-access-724rn") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). InnerVolumeSpecName "kube-api-access-724rn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.579274 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.623345 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.639620 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.640077 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.654219 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj5qd\" (UniqueName: \"kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.654383 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.654577 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.654737 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-724rn\" (UniqueName: \"kubernetes.io/projected/56781f2b-b49d-4234-981b-a01a10dfab05-kube-api-access-724rn\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.695091 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.697291 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.711151 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hg2gh" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.711386 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.711524 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.713559 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.717956 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758711 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj5qd\" (UniqueName: \"kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758760 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: 
\"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758797 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758872 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.758911 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.759510 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.759929 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" 
Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.760172 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.760680 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.761230 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.851828 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj5qd\" (UniqueName: \"kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd\") pod \"dnsmasq-dns-6664c6795f-qqrmv\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.883443 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.883776 
4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.883937 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cppt\" (UniqueName: \"kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.884035 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.884161 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.930097 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.968184 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.976469 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.977624 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config" (OuterVolumeSpecName: "config") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.985891 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.986000 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.986053 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 
13:10:01.986081 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.986113 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cppt\" (UniqueName: \"kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:01 crc kubenswrapper[4740]: I0216 13:10:01.986164 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.003179 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.047734 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.050749 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.050768 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.060909 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.065666 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cppt\" (UniqueName: \"kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.066544 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.070331 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090179 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090253 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090307 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090363 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090402 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090442 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpscq\" (UniqueName: \"kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090543 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.090567 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.093709 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.098265 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.103531 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config\") pod \"neutron-84f8c7948d-wxf52\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.105505 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" event={"ID":"56781f2b-b49d-4234-981b-a01a10dfab05","Type":"ContainerDied","Data":"07254c2d9a89687f122ec073a09001283bb6a4e93052f9c40534b8fe35f0fdbf"} Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.105649 4740 scope.go:117] "RemoveContainer" containerID="9b0f87ae4e501b819b69299fe6b0bca555aa557876b95fd376763eb368141597" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.105842 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.113086 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56781f2b-b49d-4234-981b-a01a10dfab05" (UID: "56781f2b-b49d-4234-981b-a01a10dfab05"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.129400 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dlcqm" event={"ID":"b63f4468-5c78-4dfd-a40a-302877eba3dc","Type":"ContainerStarted","Data":"1f6ef107bcaf336d76ceed01fca141680bec52be47d97dcef3c48566b0276aa5"} Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.156680 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerStarted","Data":"06bf25d33138128d15c50d580ea5273787a0565c881e9530d7786cb52837cf0e"} Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.165393 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-dlcqm" podStartSLOduration=2.6279626069999997 podStartE2EDuration="27.165371544s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="2026-02-16 13:09:36.652689619 +0000 UTC m=+1004.029038340" lastFinishedPulling="2026-02-16 13:10:01.190098556 +0000 UTC m=+1028.566447277" observedRunningTime="2026-02-16 13:10:02.160761189 +0000 UTC m=+1029.537109910" watchObservedRunningTime="2026-02-16 13:10:02.165371544 +0000 UTC m=+1029.541720265" Feb 16 13:10:02 crc kubenswrapper[4740]: E0216 13:10:02.167187 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-2hxgr" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.171490 4740 scope.go:117] "RemoveContainer" containerID="96de88cee058193affe71534965c618fbce9086a5fd824cc8ef53366e9b1cf91" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.174397 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.192264 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.193597 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.193763 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpscq\" (UniqueName: \"kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.193988 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.194087 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " 
pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.194213 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.194408 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.194426 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56781f2b-b49d-4234-981b-a01a10dfab05-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.196407 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.197668 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.204055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: 
\"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.204709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.205171 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.230481 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpscq\" (UniqueName: \"kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq\") pod \"dnsmasq-dns-5ccc5c4795-cswgq\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.319082 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lxgpl"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.431570 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56b9fd8c4d-crftf"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.483413 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.488691 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.508530 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-gghds"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.783079 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.970287 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.972217 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.978150 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.978362 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nblft" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.978622 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 13:10:02 crc kubenswrapper[4740]: I0216 13:10:02.986002 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.122092 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.129569 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135422 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j2fd\" (UniqueName: \"kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135488 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135525 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135551 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135603 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135628 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.135649 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.136185 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.178217 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.193036 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.198877 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerStarted","Data":"126ea531f9d0f4e123ef2ed666501765655bc53f3a4c471202c3b01c12320210"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.198925 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" 
event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerStarted","Data":"217a126c559956ab646410696f08329f76a43596d698885dcc0d56ddc65d5b42"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.199038 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58778bbcc-2dwkc" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon-log" containerID="cri-o://217a126c559956ab646410696f08329f76a43596d698885dcc0d56ddc65d5b42" gracePeriod=30 Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.199634 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58778bbcc-2dwkc" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon" containerID="cri-o://126ea531f9d0f4e123ef2ed666501765655bc53f3a4c471202c3b01c12320210" gracePeriod=30 Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.212472 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d9rnm" event={"ID":"2fa1d954-018c-45a1-93e6-149318cdda8c","Type":"ContainerStarted","Data":"8842820f2cf4ddd2c9503055eef8bb5b04d701bcbb5f9d0e546ae5434e57491e"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.219852 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerStarted","Data":"6c46e84c5bdb3103f28f1cca092b9bf9c54cd61d61aebe4345cf80eca5a71fc3"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.219936 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerStarted","Data":"99f292203fe1ed3c399eda55f9bf0dc36d25dfeda0396d3a1124005fb27b7059"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.220076 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-dfc4b7997-nx6ww" 
podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon-log" containerID="cri-o://99f292203fe1ed3c399eda55f9bf0dc36d25dfeda0396d3a1124005fb27b7059" gracePeriod=30 Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.220335 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-dfc4b7997-nx6ww" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon" containerID="cri-o://6c46e84c5bdb3103f28f1cca092b9bf9c54cd61d61aebe4345cf80eca5a71fc3" gracePeriod=30 Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.234022 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerStarted","Data":"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.234068 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerStarted","Data":"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.236922 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z8mv\" (UniqueName: \"kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.239248 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" event={"ID":"9d3f4b10-353c-4963-96e9-c5e178df6c03","Type":"ContainerStarted","Data":"dd64b688e1590f3434f39d4272a1ad6a38ca58e66991330aef14f81fd719fdd1"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240300 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240374 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240398 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240424 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240500 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240567 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240604 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240627 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240646 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240667 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240705 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240765 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.240898 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j2fd\" (UniqueName: \"kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.242342 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.242706 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.248780 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.253914 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.254836 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b9fd8c4d-crftf" event={"ID":"add1eb0e-dbfc-463a-b676-3e2e2b1f478d","Type":"ContainerStarted","Data":"09cd750e98b519f01b27e16cb426575348a5747cf9fd7e48e3599460511afb8a"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.254884 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b9fd8c4d-crftf" event={"ID":"add1eb0e-dbfc-463a-b676-3e2e2b1f478d","Type":"ContainerStarted","Data":"c5674c9e734a1860b42445df73308b8cae2af736059f1bf52d50e16bca7865c2"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.255150 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.257445 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: 
I0216 13:10:03.257529 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"] Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.263959 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lxgpl" event={"ID":"c1263236-13e5-4a79-b19a-96f535ae0783","Type":"ContainerStarted","Data":"a8954ab70eb71a11cadf0847488023deb3343352dcccef9132db389ddd167a80"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.264012 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lxgpl" event={"ID":"c1263236-13e5-4a79-b19a-96f535ae0783","Type":"ContainerStarted","Data":"ab97a3afcc64598d45c5a6410abb040f79933e08b088c704a4dcf8afb6ae8400"} Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.274521 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-58778bbcc-2dwkc" podStartSLOduration=3.601875096 podStartE2EDuration="26.274499695s" podCreationTimestamp="2026-02-16 13:09:37 +0000 UTC" firstStartedPulling="2026-02-16 13:09:38.60051582 +0000 UTC m=+1005.976864541" lastFinishedPulling="2026-02-16 13:10:01.273140409 +0000 UTC m=+1028.649489140" observedRunningTime="2026-02-16 13:10:03.224552123 +0000 UTC m=+1030.600900844" watchObservedRunningTime="2026-02-16 13:10:03.274499695 +0000 UTC m=+1030.650848416" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.297220 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-dfc4b7997-nx6ww" podStartSLOduration=4.767074478 podStartE2EDuration="28.297198219s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="2026-02-16 13:09:36.582869752 +0000 UTC m=+1003.959218473" lastFinishedPulling="2026-02-16 13:10:00.112993493 +0000 UTC m=+1027.489342214" observedRunningTime="2026-02-16 13:10:03.248510907 +0000 UTC m=+1030.624859638" watchObservedRunningTime="2026-02-16 13:10:03.297198219 +0000 UTC m=+1030.673546940" Feb 16 
13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.300658 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j2fd\" (UniqueName: \"kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.310619 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-d9rnm" podStartSLOduration=5.055232885 podStartE2EDuration="28.310602141s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="2026-02-16 13:09:36.848945494 +0000 UTC m=+1004.225294215" lastFinishedPulling="2026-02-16 13:10:00.10431475 +0000 UTC m=+1027.480663471" observedRunningTime="2026-02-16 13:10:03.269102435 +0000 UTC m=+1030.645451156" watchObservedRunningTime="2026-02-16 13:10:03.310602141 +0000 UTC m=+1030.686950862" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.319585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.321096 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lxgpl" podStartSLOduration=10.321008688 podStartE2EDuration="10.321008688s" podCreationTimestamp="2026-02-16 13:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:03.295376061 +0000 UTC m=+1030.671724812" watchObservedRunningTime="2026-02-16 13:10:03.321008688 +0000 UTC m=+1030.697357409" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 
13:10:03.330507 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5476559f6b-jvkbv" podStartSLOduration=19.330489287 podStartE2EDuration="19.330489287s" podCreationTimestamp="2026-02-16 13:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:03.32485955 +0000 UTC m=+1030.701208281" watchObservedRunningTime="2026-02-16 13:10:03.330489287 +0000 UTC m=+1030.706838008" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342374 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342470 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342499 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342551 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " 
pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342588 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342705 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z8mv\" (UniqueName: \"kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.342764 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.343592 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.344563 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 
crc kubenswrapper[4740]: I0216 13:10:03.345365 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.356216 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.356505 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.364206 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.378961 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z8mv\" (UniqueName: \"kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.388000 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" path="/var/lib/kubelet/pods/56781f2b-b49d-4234-981b-a01a10dfab05/volumes" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.418725 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.501343 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:10:03 crc kubenswrapper[4740]: I0216 13:10:03.758249 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.271602 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" event={"ID":"e96a5e58-8096-4550-8a98-f47ad00622f8","Type":"ContainerStarted","Data":"c58ac0d9aec7a38c3ff27fbc2a9071447407d2f7a15c831455eefd159f4d45ac"} Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.274895 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b9fd8c4d-crftf" event={"ID":"add1eb0e-dbfc-463a-b676-3e2e2b1f478d","Type":"ContainerStarted","Data":"17ea669b2eb8ab74dc8b09119345d0a52be439c8e372e208e8672fcab2a13e40"} Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.276089 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerStarted","Data":"1506e84718fd32b51421d0c20379f7ab72db7c5398d1bae1271890a3cd491379"} Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.277387 4740 generic.go:334] "Generic (PLEG): container finished" podID="9d3f4b10-353c-4963-96e9-c5e178df6c03" 
containerID="68bfd2d35765947f76613629bb92c60e3bbb553e79f1b9b529dd9f256f7a1ccd" exitCode=0 Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.277483 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" event={"ID":"9d3f4b10-353c-4963-96e9-c5e178df6c03","Type":"ContainerDied","Data":"68bfd2d35765947f76613629bb92c60e3bbb553e79f1b9b529dd9f256f7a1ccd"} Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.302909 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-56b9fd8c4d-crftf" podStartSLOduration=20.302883854 podStartE2EDuration="20.302883854s" podCreationTimestamp="2026-02-16 13:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:04.296305548 +0000 UTC m=+1031.672654269" watchObservedRunningTime="2026-02-16 13:10:04.302883854 +0000 UTC m=+1031.679232575" Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.522983 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:10:04 crc kubenswrapper[4740]: I0216 13:10:04.523051 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:04.614482 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:04.614791 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.713787 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825583 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825648 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825729 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825787 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj5qd\" (UniqueName: \"kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825848 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.825870 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb\") pod \"9d3f4b10-353c-4963-96e9-c5e178df6c03\" (UID: \"9d3f4b10-353c-4963-96e9-c5e178df6c03\") " Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.828082 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.839379 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd" (OuterVolumeSpecName: "kube-api-access-dj5qd") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "kube-api-access-dj5qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.869539 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.904433 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config" (OuterVolumeSpecName: "config") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.910333 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.911648 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.928409 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.928431 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.928442 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.928450 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-swift-storage-0\") on node \"crc\" DevicePath 
\"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.928459 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj5qd\" (UniqueName: \"kubernetes.io/projected/9d3f4b10-353c-4963-96e9-c5e178df6c03-kube-api-access-dj5qd\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.937283 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-gghds" podUID="56781f2b-b49d-4234-981b-a01a10dfab05" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:05.961200 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d3f4b10-353c-4963-96e9-c5e178df6c03" (UID: "9d3f4b10-353c-4963-96e9-c5e178df6c03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.029773 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3f4b10-353c-4963-96e9-c5e178df6c03-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.296367 4740 generic.go:334] "Generic (PLEG): container finished" podID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerID="e872b72de1e58161869a094bde76919910717e78ea91f7910e54161886b8bc03" exitCode=0 Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.296414 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" event={"ID":"e96a5e58-8096-4550-8a98-f47ad00622f8","Type":"ContainerDied","Data":"e872b72de1e58161869a094bde76919910717e78ea91f7910e54161886b8bc03"} Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.298090 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.298091 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6664c6795f-qqrmv" event={"ID":"9d3f4b10-353c-4963-96e9-c5e178df6c03","Type":"ContainerDied","Data":"dd64b688e1590f3434f39d4272a1ad6a38ca58e66991330aef14f81fd719fdd1"} Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.298218 4740 scope.go:117] "RemoveContainer" containerID="68bfd2d35765947f76613629bb92c60e3bbb553e79f1b9b529dd9f256f7a1ccd" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.313748 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerStarted","Data":"90b8c13105002e23bcbcbbfad6decf1c72cc959f43ea05e2caf8e9e59758b0a9"} Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.313797 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerStarted","Data":"1e87b39de0e646ce6ba449ee63a82579996c381dcaa6e162f4b5763dfb4523c5"} Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.314127 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.321824 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.329109 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerStarted","Data":"1688b3ca22762e9c4762a55031e5640c60db4ffb9816f6edaf523dcfadd2c0a7"} Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.424326 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:06 
crc kubenswrapper[4740]: I0216 13:10:06.452887 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6664c6795f-qqrmv"] Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.468516 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.478773 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84f8c7948d-wxf52" podStartSLOduration=5.478753041 podStartE2EDuration="5.478753041s" podCreationTimestamp="2026-02-16 13:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:06.453799696 +0000 UTC m=+1033.830148417" watchObservedRunningTime="2026-02-16 13:10:06.478753041 +0000 UTC m=+1033.855101762" Feb 16 13:10:06 crc kubenswrapper[4740]: I0216 13:10:06.953210 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:06 crc kubenswrapper[4740]: W0216 13:10:06.956131 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dd2ae56_b6b0_4b08_8dba_62e7b9f816e6.slice/crio-99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d WatchSource:0}: Error finding container 99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d: Status 404 returned error can't find the container with id 99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.354320 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3f4b10-353c-4963-96e9-c5e178df6c03" path="/var/lib/kubelet/pods/9d3f4b10-353c-4963-96e9-c5e178df6c03/volumes" Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.374149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" 
event={"ID":"e96a5e58-8096-4550-8a98-f47ad00622f8","Type":"ContainerStarted","Data":"d6be9db51cf7fad68c8cf132e91636d6d138c1b5511e518fd0f0ee9e6deb489c"} Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.374227 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.419423 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" podStartSLOduration=6.41940225 podStartE2EDuration="6.41940225s" podCreationTimestamp="2026-02-16 13:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:07.404392748 +0000 UTC m=+1034.780741469" watchObservedRunningTime="2026-02-16 13:10:07.41940225 +0000 UTC m=+1034.795750971" Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.456117 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerStarted","Data":"99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d"} Feb 16 13:10:07 crc kubenswrapper[4740]: I0216 13:10:07.825560 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.056869 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.069798 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] Feb 16 13:10:08 crc kubenswrapper[4740]: E0216 13:10:08.070588 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3f4b10-353c-4963-96e9-c5e178df6c03" containerName="init" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.070614 4740 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9d3f4b10-353c-4963-96e9-c5e178df6c03" containerName="init" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.070984 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3f4b10-353c-4963-96e9-c5e178df6c03" containerName="init" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.072165 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.077961 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.078322 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.085421 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] Feb 16 13:10:08 crc kubenswrapper[4740]: W0216 13:10:08.107052 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44bcd77c_cccb_42d5_9cff_81c0c63bd919.slice/crio-fed99df6851ac1bca844877c035ebdaefb841cde18c63d2d741a412b55f6e913 WatchSource:0}: Error finding container fed99df6851ac1bca844877c035ebdaefb841cde18c63d2d741a412b55f6e913: Status 404 returned error can't find the container with id fed99df6851ac1bca844877c035ebdaefb841cde18c63d2d741a412b55f6e913 Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209500 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q69zx\" (UniqueName: \"kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209589 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209635 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209674 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209756 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209859 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.209893 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312080 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312554 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312656 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312697 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312738 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q69zx\" (UniqueName: \"kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.312795 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.317828 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.321564 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.322387 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle\") pod \"neutron-c64d8f89f-pfmqj\" (UID: 
\"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.323285 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.327299 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.337365 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.339508 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q69zx\" (UniqueName: \"kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx\") pod \"neutron-c64d8f89f-pfmqj\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") " pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.462027 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.471288 4740 generic.go:334] "Generic (PLEG): container finished" podID="c1263236-13e5-4a79-b19a-96f535ae0783" containerID="a8954ab70eb71a11cadf0847488023deb3343352dcccef9132db389ddd167a80" exitCode=0 Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.471367 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lxgpl" event={"ID":"c1263236-13e5-4a79-b19a-96f535ae0783","Type":"ContainerDied","Data":"a8954ab70eb71a11cadf0847488023deb3343352dcccef9132db389ddd167a80"} Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.480996 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerStarted","Data":"fed99df6851ac1bca844877c035ebdaefb841cde18c63d2d741a412b55f6e913"} Feb 16 13:10:08 crc kubenswrapper[4740]: I0216 13:10:08.490242 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerStarted","Data":"3eccc44255e03c3377abc687e9c41e721ea09718010749b621343a7f92179705"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.257150 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] Feb 16 13:10:09 crc kubenswrapper[4740]: W0216 13:10:09.272869 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd15191d_cc73_4274_b185_d3572e5deac0.slice/crio-09e37df47c206f35c05b1284ea0f7acd624391787074b2cc755f95b355d26971 WatchSource:0}: Error finding container 09e37df47c206f35c05b1284ea0f7acd624391787074b2cc755f95b355d26971: Status 404 returned error can't find the container with id 09e37df47c206f35c05b1284ea0f7acd624391787074b2cc755f95b355d26971 Feb 16 13:10:09 crc 
kubenswrapper[4740]: I0216 13:10:09.527631 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerStarted","Data":"09e37df47c206f35c05b1284ea0f7acd624391787074b2cc755f95b355d26971"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.531214 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerStarted","Data":"cb17528b3e08a30ed049f7a29a8451584aa334f7cdf4b1ee6208a4ee1d2f66b6"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.541058 4740 generic.go:334] "Generic (PLEG): container finished" podID="2fa1d954-018c-45a1-93e6-149318cdda8c" containerID="8842820f2cf4ddd2c9503055eef8bb5b04d701bcbb5f9d0e546ae5434e57491e" exitCode=0 Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.541101 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d9rnm" event={"ID":"2fa1d954-018c-45a1-93e6-149318cdda8c","Type":"ContainerDied","Data":"8842820f2cf4ddd2c9503055eef8bb5b04d701bcbb5f9d0e546ae5434e57491e"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.543490 4740 generic.go:334] "Generic (PLEG): container finished" podID="b63f4468-5c78-4dfd-a40a-302877eba3dc" containerID="1f6ef107bcaf336d76ceed01fca141680bec52be47d97dcef3c48566b0276aa5" exitCode=0 Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.543549 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dlcqm" event={"ID":"b63f4468-5c78-4dfd-a40a-302877eba3dc","Type":"ContainerDied","Data":"1f6ef107bcaf336d76ceed01fca141680bec52be47d97dcef3c48566b0276aa5"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.547527 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-log" 
containerID="cri-o://3eccc44255e03c3377abc687e9c41e721ea09718010749b621343a7f92179705" gracePeriod=30 Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.547760 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerStarted","Data":"d6f9feb8edce8f2ec6ae24391658e5a12b683ef4cdb4b51bd4bc709071ae093a"} Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.547974 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-httpd" containerID="cri-o://d6f9feb8edce8f2ec6ae24391658e5a12b683ef4cdb4b51bd4bc709071ae093a" gracePeriod=30 Feb 16 13:10:09 crc kubenswrapper[4740]: I0216 13:10:09.617057 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.616924331 podStartE2EDuration="8.616924331s" podCreationTimestamp="2026-02-16 13:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:09.615070773 +0000 UTC m=+1036.991419494" watchObservedRunningTime="2026-02-16 13:10:09.616924331 +0000 UTC m=+1036.993273052" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.381167 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414080 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414153 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414213 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq56k\" (UniqueName: \"kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414318 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414342 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.414392 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys\") pod \"c1263236-13e5-4a79-b19a-96f535ae0783\" (UID: \"c1263236-13e5-4a79-b19a-96f535ae0783\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.424136 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k" (OuterVolumeSpecName: "kube-api-access-qq56k") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "kube-api-access-qq56k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.430321 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts" (OuterVolumeSpecName: "scripts") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.456989 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.479226 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.517924 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.517964 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq56k\" (UniqueName: \"kubernetes.io/projected/c1263236-13e5-4a79-b19a-96f535ae0783-kube-api-access-qq56k\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.517978 4740 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.517990 4740 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.567866 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerStarted","Data":"c410381d7374eff2277f42e0cd7ca5c44964d1eb3335c6251f192d7b0d2e3b6a"} Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.573056 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lxgpl" event={"ID":"c1263236-13e5-4a79-b19a-96f535ae0783","Type":"ContainerDied","Data":"ab97a3afcc64598d45c5a6410abb040f79933e08b088c704a4dcf8afb6ae8400"} Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.573356 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab97a3afcc64598d45c5a6410abb040f79933e08b088c704a4dcf8afb6ae8400" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 
13:10:10.573415 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lxgpl" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576599 4740 generic.go:334] "Generic (PLEG): container finished" podID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerID="d6f9feb8edce8f2ec6ae24391658e5a12b683ef4cdb4b51bd4bc709071ae093a" exitCode=0 Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576634 4740 generic.go:334] "Generic (PLEG): container finished" podID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerID="3eccc44255e03c3377abc687e9c41e721ea09718010749b621343a7f92179705" exitCode=143 Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576796 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerDied","Data":"d6f9feb8edce8f2ec6ae24391658e5a12b683ef4cdb4b51bd4bc709071ae093a"} Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576902 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerDied","Data":"3eccc44255e03c3377abc687e9c41e721ea09718010749b621343a7f92179705"} Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576917 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6","Type":"ContainerDied","Data":"99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d"} Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576927 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99fb1d089f391a93e3c09d2b4743d51995486f2e09db0f28a0d5bd20b738ee5d" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.576996 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.598962 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data" (OuterVolumeSpecName: "config-data") pod "c1263236-13e5-4a79-b19a-96f535ae0783" (UID: "c1263236-13e5-4a79-b19a-96f535ae0783"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.619361 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.619389 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1263236-13e5-4a79-b19a-96f535ae0783-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.630185 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.689659 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5cc7d69b6f-dmv77"] Feb 16 13:10:10 crc kubenswrapper[4740]: E0216 13:10:10.690299 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-log" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690320 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-log" Feb 16 13:10:10 crc kubenswrapper[4740]: E0216 13:10:10.690336 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-httpd" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690344 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-httpd" Feb 16 13:10:10 crc kubenswrapper[4740]: E0216 13:10:10.690380 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1263236-13e5-4a79-b19a-96f535ae0783" containerName="keystone-bootstrap" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690388 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1263236-13e5-4a79-b19a-96f535ae0783" containerName="keystone-bootstrap" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690598 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-log" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690611 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" containerName="glance-httpd" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.690629 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1263236-13e5-4a79-b19a-96f535ae0783" 
containerName="keystone-bootstrap" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.691368 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.694142 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.694834 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.736963 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5cc7d69b6f-dmv77"] Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822546 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822706 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822757 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822831 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822858 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822907 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.822991 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j2fd\" (UniqueName: \"kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd\") pod \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\" (UID: \"2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6\") " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823272 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-fernet-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823333 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-internal-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 
13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823386 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-credential-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823412 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f7n5\" (UniqueName: \"kubernetes.io/projected/e68475b5-404f-48fc-a05a-ea18135e837c-kube-api-access-6f7n5\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823451 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-scripts\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823483 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-public-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823830 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-combined-ca-bundle\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " 
pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.823892 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-config-data\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.825217 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.832969 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs" (OuterVolumeSpecName: "logs") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.836897 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.837112 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts" (OuterVolumeSpecName: "scripts") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.872339 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd" (OuterVolumeSpecName: "kube-api-access-9j2fd") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "kube-api-access-9j2fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.878165 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929379 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-config-data\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929462 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-fernet-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929497 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-internal-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929530 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-credential-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929547 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f7n5\" (UniqueName: \"kubernetes.io/projected/e68475b5-404f-48fc-a05a-ea18135e837c-kube-api-access-6f7n5\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc 
kubenswrapper[4740]: I0216 13:10:10.929570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-scripts\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929590 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-public-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929617 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-combined-ca-bundle\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929879 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929923 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929934 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929957 4740 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.929967 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j2fd\" (UniqueName: \"kubernetes.io/projected/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-kube-api-access-9j2fd\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.935146 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.942209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-credential-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.953357 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-public-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.958398 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-fernet-keys\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.961650 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-config-data\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.966004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-combined-ca-bundle\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.967029 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-scripts\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.973191 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f7n5\" (UniqueName: \"kubernetes.io/projected/e68475b5-404f-48fc-a05a-ea18135e837c-kube-api-access-6f7n5\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.973369 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data" (OuterVolumeSpecName: "config-data") pod "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" (UID: "2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:10 crc kubenswrapper[4740]: I0216 13:10:10.975342 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68475b5-404f-48fc-a05a-ea18135e837c-internal-tls-certs\") pod \"keystone-5cc7d69b6f-dmv77\" (UID: \"e68475b5-404f-48fc-a05a-ea18135e837c\") " pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:10.997681 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.037824 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.037859 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.047215 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.234761 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-d9rnm" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.240406 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts\") pod \"2fa1d954-018c-45a1-93e6-149318cdda8c\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.240602 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle\") pod \"2fa1d954-018c-45a1-93e6-149318cdda8c\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.240866 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs\") pod \"2fa1d954-018c-45a1-93e6-149318cdda8c\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.240997 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv6qp\" (UniqueName: \"kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp\") pod \"2fa1d954-018c-45a1-93e6-149318cdda8c\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.241048 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data\") pod \"2fa1d954-018c-45a1-93e6-149318cdda8c\" (UID: \"2fa1d954-018c-45a1-93e6-149318cdda8c\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.242742 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs" (OuterVolumeSpecName: "logs") pod "2fa1d954-018c-45a1-93e6-149318cdda8c" (UID: "2fa1d954-018c-45a1-93e6-149318cdda8c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.245314 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.250863 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp" (OuterVolumeSpecName: "kube-api-access-cv6qp") pod "2fa1d954-018c-45a1-93e6-149318cdda8c" (UID: "2fa1d954-018c-45a1-93e6-149318cdda8c"). InnerVolumeSpecName "kube-api-access-cv6qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.250914 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts" (OuterVolumeSpecName: "scripts") pod "2fa1d954-018c-45a1-93e6-149318cdda8c" (UID: "2fa1d954-018c-45a1-93e6-149318cdda8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.268329 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fa1d954-018c-45a1-93e6-149318cdda8c" (UID: "2fa1d954-018c-45a1-93e6-149318cdda8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.313434 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data" (OuterVolumeSpecName: "config-data") pod "2fa1d954-018c-45a1-93e6-149318cdda8c" (UID: "2fa1d954-018c-45a1-93e6-149318cdda8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.346696 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle\") pod \"b63f4468-5c78-4dfd-a40a-302877eba3dc\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.346785 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data\") pod \"b63f4468-5c78-4dfd-a40a-302877eba3dc\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.346954 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd2x2\" (UniqueName: \"kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2\") pod \"b63f4468-5c78-4dfd-a40a-302877eba3dc\" (UID: \"b63f4468-5c78-4dfd-a40a-302877eba3dc\") " Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.347428 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.347455 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2fa1d954-018c-45a1-93e6-149318cdda8c-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.347465 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv6qp\" (UniqueName: \"kubernetes.io/projected/2fa1d954-018c-45a1-93e6-149318cdda8c-kube-api-access-cv6qp\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.347474 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.347482 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa1d954-018c-45a1-93e6-149318cdda8c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.383111 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b63f4468-5c78-4dfd-a40a-302877eba3dc" (UID: "b63f4468-5c78-4dfd-a40a-302877eba3dc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.392159 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b63f4468-5c78-4dfd-a40a-302877eba3dc" (UID: "b63f4468-5c78-4dfd-a40a-302877eba3dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.394694 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2" (OuterVolumeSpecName: "kube-api-access-jd2x2") pod "b63f4468-5c78-4dfd-a40a-302877eba3dc" (UID: "b63f4468-5c78-4dfd-a40a-302877eba3dc"). InnerVolumeSpecName "kube-api-access-jd2x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.449727 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd2x2\" (UniqueName: \"kubernetes.io/projected/b63f4468-5c78-4dfd-a40a-302877eba3dc-kube-api-access-jd2x2\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.449772 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.449784 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b63f4468-5c78-4dfd-a40a-302877eba3dc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.627886 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerStarted","Data":"82ace8655965ebda6188db4e8cdb8d5ed74c5ffb030764e27bbd3112a750363d"} Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.628086 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-log" 
containerID="cri-o://cb17528b3e08a30ed049f7a29a8451584aa334f7cdf4b1ee6208a4ee1d2f66b6" gracePeriod=30 Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.628599 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-httpd" containerID="cri-o://82ace8655965ebda6188db4e8cdb8d5ed74c5ffb030764e27bbd3112a750363d" gracePeriod=30 Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.634344 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d9rnm" event={"ID":"2fa1d954-018c-45a1-93e6-149318cdda8c","Type":"ContainerDied","Data":"9b1b4bed7708a93e0a35b8ec7450ea513f432b2711fa0f1f62f6fa444b6ade25"} Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.634390 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1b4bed7708a93e0a35b8ec7450ea513f432b2711fa0f1f62f6fa444b6ade25" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.634444 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d9rnm" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.638501 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dlcqm" event={"ID":"b63f4468-5c78-4dfd-a40a-302877eba3dc","Type":"ContainerDied","Data":"b7811f49189bed88b6538a8117cf752349ac762f9fdcabeb27482bf9fb8a222e"} Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.638555 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7811f49189bed88b6538a8117cf752349ac762f9fdcabeb27482bf9fb8a222e" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.638634 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dlcqm" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.649901 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.653639 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerStarted","Data":"b8d8df7a4ac3106e08eff1e473c10fdc358e20ceadcee7139973a904e19f8b91"} Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.654030 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.693714 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.693695173 podStartE2EDuration="9.693695173s" podCreationTimestamp="2026-02-16 13:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:11.669110258 +0000 UTC m=+1039.045458979" watchObservedRunningTime="2026-02-16 13:10:11.693695173 +0000 UTC m=+1039.070043894" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.712239 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5cc7d69b6f-dmv77"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.749797 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.791074 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.797592 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:11 crc 
kubenswrapper[4740]: E0216 13:10:11.798231 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b63f4468-5c78-4dfd-a40a-302877eba3dc" containerName="barbican-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.798254 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63f4468-5c78-4dfd-a40a-302877eba3dc" containerName="barbican-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: E0216 13:10:11.798274 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa1d954-018c-45a1-93e6-149318cdda8c" containerName="placement-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.798280 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa1d954-018c-45a1-93e6-149318cdda8c" containerName="placement-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.798923 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa1d954-018c-45a1-93e6-149318cdda8c" containerName="placement-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.798948 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b63f4468-5c78-4dfd-a40a-302877eba3dc" containerName="barbican-db-sync" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.802638 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.808282 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.809632 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.809793 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.820972 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c64d8f89f-pfmqj" podStartSLOduration=3.820954137 podStartE2EDuration="3.820954137s" podCreationTimestamp="2026-02-16 13:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:11.743451233 +0000 UTC m=+1039.119799964" watchObservedRunningTime="2026-02-16 13:10:11.820954137 +0000 UTC m=+1039.197302858" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.857635 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-758fd9dd8b-46z5m"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.905625 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.910528 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.914920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.914999 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.915072 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.915110 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.915580 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.915618 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25p4g\" (UniqueName: \"kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.947269 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.951149 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6f4698b555-qswqc"] Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.968451 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.968706 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-n77m5" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.977332 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 13:10:11 crc kubenswrapper[4740]: I0216 13:10:11.983580 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017691 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017756 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017787 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017834 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc 
kubenswrapper[4740]: I0216 13:10:12.017888 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017924 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25p4g\" (UniqueName: \"kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.017992 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.018410 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.027358 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.028214 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs\") pod \"glance-default-external-api-0\" (UID: 
\"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.033004 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-758fd9dd8b-46z5m"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.039349 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.046921 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.047522 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.048428 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.103859 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f4698b555-qswqc"] Feb 16 13:10:12 crc kubenswrapper[4740]: 
I0216 13:10:12.108124 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.114200 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25p4g\" (UniqueName: \"kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.119188 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69d2v\" (UniqueName: \"kubernetes.io/projected/c3550143-6df6-42d0-b18a-8b6275eac907-kube-api-access-69d2v\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.119230 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3550143-6df6-42d0-b18a-8b6275eac907-logs\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.119260 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r85g5\" (UniqueName: \"kubernetes.io/projected/30b251e5-1979-41ad-ad86-efebb5e6a240-kube-api-access-r85g5\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" 
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.119286 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-combined-ca-bundle\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.119321 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data-custom\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.120277 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-combined-ca-bundle\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.120382 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.120446 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data-custom\") pod 
\"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.120609 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.120639 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30b251e5-1979-41ad-ad86-efebb5e6a240-logs\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.155632 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.177932 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-758758df44-4g6db"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.181216 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.184875 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.188303 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.188494 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.188693 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.188806 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.189093 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bjvkq" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.190154 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-758758df44-4g6db"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222071 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69d2v\" (UniqueName: \"kubernetes.io/projected/c3550143-6df6-42d0-b18a-8b6275eac907-kube-api-access-69d2v\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222318 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3550143-6df6-42d0-b18a-8b6275eac907-logs\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222429 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-r85g5\" (UniqueName: \"kubernetes.io/projected/30b251e5-1979-41ad-ad86-efebb5e6a240-kube-api-access-r85g5\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222534 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-combined-ca-bundle\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222635 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data-custom\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222734 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-combined-ca-bundle\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.222910 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 
13:10:12.223015 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data-custom\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.223188 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.223289 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30b251e5-1979-41ad-ad86-efebb5e6a240-logs\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.223838 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3550143-6df6-42d0-b18a-8b6275eac907-logs\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.226207 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30b251e5-1979-41ad-ad86-efebb5e6a240-logs\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.227970 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-combined-ca-bundle\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.232444 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.238229 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-combined-ca-bundle\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.240279 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.244687 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30b251e5-1979-41ad-ad86-efebb5e6a240-config-data-custom\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 
13:10:12.259020 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3550143-6df6-42d0-b18a-8b6275eac907-config-data-custom\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.259494 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r85g5\" (UniqueName: \"kubernetes.io/projected/30b251e5-1979-41ad-ad86-efebb5e6a240-kube-api-access-r85g5\") pod \"barbican-keystone-listener-758fd9dd8b-46z5m\" (UID: \"30b251e5-1979-41ad-ad86-efebb5e6a240\") " pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.259682 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69d2v\" (UniqueName: \"kubernetes.io/projected/c3550143-6df6-42d0-b18a-8b6275eac907-kube-api-access-69d2v\") pod \"barbican-worker-6f4698b555-qswqc\" (UID: \"c3550143-6df6-42d0-b18a-8b6275eac907\") " pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.274021 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.274290 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="dnsmasq-dns" containerID="cri-o://d6be9db51cf7fad68c8cf132e91636d6d138c1b5511e518fd0f0ee9e6deb489c" gracePeriod=10 Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.278001 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.305469 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.307043 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324634 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/983c874c-3b25-49df-82cb-b3dfaf1db7ac-logs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324761 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9t5x\" (UniqueName: \"kubernetes.io/projected/983c874c-3b25-49df-82cb-b3dfaf1db7ac-kube-api-access-d9t5x\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324790 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-internal-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-public-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324968 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-config-data\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.324988 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-combined-ca-bundle\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.325047 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-scripts\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.339733 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.349644 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.351960 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.355716 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.378745 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"] Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.383466 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.414613 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6f4698b555-qswqc" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426243 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426286 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426320 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426355 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426380 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-config-data\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426400 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-combined-ca-bundle\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426454 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426476 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426494 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426518 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-scripts\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426541 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bdlr\" (UniqueName: \"kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426562 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/983c874c-3b25-49df-82cb-b3dfaf1db7ac-logs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426629 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426651 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426682 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8h97\" (UniqueName: \"kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426732 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9t5x\" (UniqueName: \"kubernetes.io/projected/983c874c-3b25-49df-82cb-b3dfaf1db7ac-kube-api-access-d9t5x\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426763 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-internal-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.426793 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-public-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.427589 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/983c874c-3b25-49df-82cb-b3dfaf1db7ac-logs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.435692 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-config-data\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.438273 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-public-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.439542 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-combined-ca-bundle\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.440923 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-internal-tls-certs\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.454681 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9t5x\" (UniqueName: \"kubernetes.io/projected/983c874c-3b25-49df-82cb-b3dfaf1db7ac-kube-api-access-d9t5x\") pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.457224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/983c874c-3b25-49df-82cb-b3dfaf1db7ac-scripts\") 
pod \"placement-758758df44-4g6db\" (UID: \"983c874c-3b25-49df-82cb-b3dfaf1db7ac\") " pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.489678 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.503468 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528153 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528206 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528295 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528321 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528357 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528390 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bdlr\" (UniqueName: \"kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528468 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528509 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8h97\" (UniqueName: 
\"kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528604 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.528633 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.530898 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.531708 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.533308 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config\") pod 
\"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.533993 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.535891 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.536436 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.545623 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.547470 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " 
pod="openstack/barbican-api-58b5794cfd-4trjb"
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.552150 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb"
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.556523 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8h97\" (UniqueName: \"kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97\") pod \"dnsmasq-dns-688c87cc99-qxpg7\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " pod="openstack/dnsmasq-dns-688c87cc99-qxpg7"
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.566626 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bdlr\" (UniqueName: \"kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr\") pod \"barbican-api-58b5794cfd-4trjb\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " pod="openstack/barbican-api-58b5794cfd-4trjb"
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.675620 4740 generic.go:334] "Generic (PLEG): container finished" podID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerID="d6be9db51cf7fad68c8cf132e91636d6d138c1b5511e518fd0f0ee9e6deb489c" exitCode=0
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.675683 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" event={"ID":"e96a5e58-8096-4550-8a98-f47ad00622f8","Type":"ContainerDied","Data":"d6be9db51cf7fad68c8cf132e91636d6d138c1b5511e518fd0f0ee9e6deb489c"}
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.677137 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5cc7d69b6f-dmv77" event={"ID":"e68475b5-404f-48fc-a05a-ea18135e837c","Type":"ContainerStarted","Data":"50ee66e2092c893cf5f0f856cd7d76e6d102c3e7ac4bf3cc2a539e3dd9eb076b"}
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.677157 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5cc7d69b6f-dmv77" event={"ID":"e68475b5-404f-48fc-a05a-ea18135e837c","Type":"ContainerStarted","Data":"d6113a5f2c1f4af5d6b255160f0631ac5bd8d5b93cb3bf016b4358612678d41a"}
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.678281 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5cc7d69b6f-dmv77"
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.691589 4740 generic.go:334] "Generic (PLEG): container finished" podID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerID="82ace8655965ebda6188db4e8cdb8d5ed74c5ffb030764e27bbd3112a750363d" exitCode=0
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.691622 4740 generic.go:334] "Generic (PLEG): container finished" podID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerID="cb17528b3e08a30ed049f7a29a8451584aa334f7cdf4b1ee6208a4ee1d2f66b6" exitCode=143
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.692383 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerDied","Data":"82ace8655965ebda6188db4e8cdb8d5ed74c5ffb030764e27bbd3112a750363d"}
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.692442 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerDied","Data":"cb17528b3e08a30ed049f7a29a8451584aa334f7cdf4b1ee6208a4ee1d2f66b6"}
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.825427 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7"
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.838463 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58b5794cfd-4trjb"
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.949798 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5cc7d69b6f-dmv77" podStartSLOduration=2.9497712849999997 podStartE2EDuration="2.949771285s" podCreationTimestamp="2026-02-16 13:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:12.700311308 +0000 UTC m=+1040.076660029" watchObservedRunningTime="2026-02-16 13:10:12.949771285 +0000 UTC m=+1040.326120006"
Feb 16 13:10:12 crc kubenswrapper[4740]: I0216 13:10:12.973441 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.087901 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-758fd9dd8b-46z5m"]
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.092247 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq"
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.202073 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.202578 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.202624 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.202735 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpscq\" (UniqueName: \"kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.202911 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.203004 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb\") pod \"e96a5e58-8096-4550-8a98-f47ad00622f8\" (UID: \"e96a5e58-8096-4550-8a98-f47ad00622f8\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.231831 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq" (OuterVolumeSpecName: "kube-api-access-gpscq") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "kube-api-access-gpscq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.344682 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpscq\" (UniqueName: \"kubernetes.io/projected/e96a5e58-8096-4550-8a98-f47ad00622f8-kube-api-access-gpscq\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.348456 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.398733 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.398285 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.421720 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6" path="/var/lib/kubelet/pods/2dd2ae56-b6b0-4b08-8dba-62e7b9f816e6/volumes"
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.443043 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.445787 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.445823 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.445833 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.468633 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config" (OuterVolumeSpecName: "config") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.489728 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e96a5e58-8096-4550-8a98-f47ad00622f8" (UID: "e96a5e58-8096-4550-8a98-f47ad00622f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546527 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546672 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546710 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546758 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546878 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z8mv\" (UniqueName: \"kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546920 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.546965 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs\") pod \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\" (UID: \"44bcd77c-cccb-42d5-9cff-81c0c63bd919\") "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.547479 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.547499 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e96a5e58-8096-4550-8a98-f47ad00622f8-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.547945 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs" (OuterVolumeSpecName: "logs") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.548192 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.578017 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts" (OuterVolumeSpecName: "scripts") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.611018 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.628027 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv" (OuterVolumeSpecName: "kube-api-access-7z8mv") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "kube-api-access-7z8mv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.650669 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z8mv\" (UniqueName: \"kubernetes.io/projected/44bcd77c-cccb-42d5-9cff-81c0c63bd919-kube-api-access-7z8mv\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.650697 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.650708 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-logs\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.650717 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44bcd77c-cccb-42d5-9cff-81c0c63bd919-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.650746 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.692999 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f4698b555-qswqc"]
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.753714 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data" (OuterVolumeSpecName: "config-data") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.756208 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44bcd77c-cccb-42d5-9cff-81c0c63bd919" (UID: "44bcd77c-cccb-42d5-9cff-81c0c63bd919"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.762114 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-758758df44-4g6db"]
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.793793 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Feb 16 13:10:13 crc kubenswrapper[4740]: W0216 13:10:13.804174 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod983c874c_3b25_49df_82cb_b3dfaf1db7ac.slice/crio-4d9eedc4d1d15201d773421cd11829d32e90f67816a00e8af2621718ce17f97e WatchSource:0}: Error finding container 4d9eedc4d1d15201d773421cd11829d32e90f67816a00e8af2621718ce17f97e: Status 404 returned error can't find the container with id 4d9eedc4d1d15201d773421cd11829d32e90f67816a00e8af2621718ce17f97e
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.823951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" event={"ID":"30b251e5-1979-41ad-ad86-efebb5e6a240","Type":"ContainerStarted","Data":"05535fb9b406eba46f389f3cb501f252adf4d6c36395ee05c09e7c3dc0a2cc74"}
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.841587 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f4698b555-qswqc" event={"ID":"c3550143-6df6-42d0-b18a-8b6275eac907","Type":"ContainerStarted","Data":"7fc52501a0de9e1866cab48c6e4c1a1ccfd8612bb6a0ab877e598a0179812bdf"}
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.853661 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq" event={"ID":"e96a5e58-8096-4550-8a98-f47ad00622f8","Type":"ContainerDied","Data":"c58ac0d9aec7a38c3ff27fbc2a9071447407d2f7a15c831455eefd159f4d45ac"}
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.854206 4740 scope.go:117] "RemoveContainer" containerID="d6be9db51cf7fad68c8cf132e91636d6d138c1b5511e518fd0f0ee9e6deb489c"
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.854512 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cswgq"
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.857727 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.857765 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44bcd77c-cccb-42d5-9cff-81c0c63bd919-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.857777 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.957648 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"]
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.976614 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerStarted","Data":"5b0e438309976f20e6cf23ad9e1052b831e4bb9fad154e2636f7ef4afee681a4"}
Feb 16 13:10:13 crc kubenswrapper[4740]: I0216 13:10:13.999064 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cswgq"]
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.056021 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.057965 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44bcd77c-cccb-42d5-9cff-81c0c63bd919","Type":"ContainerDied","Data":"fed99df6851ac1bca844877c035ebdaefb841cde18c63d2d741a412b55f6e913"}
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.085454 4740 scope.go:117] "RemoveContainer" containerID="e872b72de1e58161869a094bde76919910717e78ea91f7910e54161886b8bc03"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.155509 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"]
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.196892 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.224338 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.236002 4740 scope.go:117] "RemoveContainer" containerID="82ace8655965ebda6188db4e8cdb8d5ed74c5ffb030764e27bbd3112a750363d"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.240474 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:10:14 crc kubenswrapper[4740]: E0216 13:10:14.240937 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="init"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.240964 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="init"
Feb 16 13:10:14 crc kubenswrapper[4740]: E0216 13:10:14.240994 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-httpd"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241004 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-httpd"
Feb 16 13:10:14 crc kubenswrapper[4740]: E0216 13:10:14.241024 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="dnsmasq-dns"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241032 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="dnsmasq-dns"
Feb 16 13:10:14 crc kubenswrapper[4740]: E0216 13:10:14.241046 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-log"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241052 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-log"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241244 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-httpd"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241282 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" containerName="dnsmasq-dns"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.241304 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" containerName="glance-log"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.243766 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.253474 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.257797 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.296261 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:10:14 crc kubenswrapper[4740]: W0216 13:10:14.314115 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fd3ee_2093_4adb_a7af_23d05c718429.slice/crio-ee39dbad8b099c7103531639b41565947f88a2ed618018750ab7657614220f4a WatchSource:0}: Error finding container ee39dbad8b099c7103531639b41565947f88a2ed618018750ab7657614220f4a: Status 404 returned error can't find the container with id ee39dbad8b099c7103531639b41565947f88a2ed618018750ab7657614220f4a
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.320323 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"]
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.381859 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.381956 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382031 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8spl\" (UniqueName: \"kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382087 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382150 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382190 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.382311 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.398962 4740 scope.go:117] "RemoveContainer" containerID="cb17528b3e08a30ed049f7a29a8451584aa334f7cdf4b1ee6208a4ee1d2f66b6"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483783 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483850 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483892 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8spl\" (UniqueName: \"kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483924 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483945 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483966 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.483991 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.484042 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.484410 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.491779 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.492264 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.501583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.503842 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.503853 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.505292 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.524457 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8spl\" (UniqueName: \"kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.527200 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.528843 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.626974 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56b9fd8c4d-crftf" podUID="add1eb0e-dbfc-463a-b676-3e2e2b1f478d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused"
Feb 16 13:10:14 crc kubenswrapper[4740]: I0216 13:10:14.667616 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.092626 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerStarted","Data":"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d"}
Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.105375 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-758758df44-4g6db" event={"ID":"983c874c-3b25-49df-82cb-b3dfaf1db7ac","Type":"ContainerStarted","Data":"8dca7c5d645d9c1b3cb27e141991a881de7063a8cfe75c04fc3f91921ff7f1b9"}
Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.105451 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-758758df44-4g6db" event={"ID":"983c874c-3b25-49df-82cb-b3dfaf1db7ac","Type":"ContainerStarted","Data":"90595916ba48eea6e67347b7e520ad15b0e0e798381a72b531b7c7fe3509ca4a"}
Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.105463 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-758758df44-4g6db" event={"ID":"983c874c-3b25-49df-82cb-b3dfaf1db7ac","Type":"ContainerStarted","Data":"4d9eedc4d1d15201d773421cd11829d32e90f67816a00e8af2621718ce17f97e"}
Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.105779 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-758758df44-4g6db"
Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.105841 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.136114 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-758758df44-4g6db" podStartSLOduration=4.136094622 podStartE2EDuration="4.136094622s" podCreationTimestamp="2026-02-16 13:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:15.130796145 +0000 UTC m=+1042.507144866" watchObservedRunningTime="2026-02-16 13:10:15.136094622 +0000 UTC m=+1042.512443343" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.168329 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerStarted","Data":"fcdcb5b22e8cc76adbbe1f757da3d2bf7c1e93bb52cf022c637fac80081d1283"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.168379 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerStarted","Data":"302a4fe2d789df7c6696f0d0599ddcf1f3c215a4ff9751a1692adaad334ff853"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.176429 4740 generic.go:334] "Generic (PLEG): container finished" podID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerID="2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7" exitCode=0 Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.176573 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" event={"ID":"bc2fd3ee-2093-4adb-a7af-23d05c718429","Type":"ContainerDied","Data":"2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.176605 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" 
event={"ID":"bc2fd3ee-2093-4adb-a7af-23d05c718429","Type":"ContainerStarted","Data":"ee39dbad8b099c7103531639b41565947f88a2ed618018750ab7657614220f4a"} Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.299614 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bcd77c-cccb-42d5-9cff-81c0c63bd919" path="/var/lib/kubelet/pods/44bcd77c-cccb-42d5-9cff-81c0c63bd919/volumes" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.301712 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e96a5e58-8096-4550-8a98-f47ad00622f8" path="/var/lib/kubelet/pods/e96a5e58-8096-4550-8a98-f47ad00622f8/volumes" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.460659 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.861695 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5fbb5f795d-phd88"] Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.871603 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.874764 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.875279 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.903648 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fbb5f795d-phd88"] Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.935851 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data-custom\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.935953 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-internal-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.935992 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/793c4693-2327-492b-9798-18501804cdf3-logs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.936011 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.936071 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cjbn\" (UniqueName: \"kubernetes.io/projected/793c4693-2327-492b-9798-18501804cdf3-kube-api-access-7cjbn\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.936100 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-combined-ca-bundle\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:15 crc kubenswrapper[4740]: I0216 13:10:15.936134 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-public-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038044 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-internal-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038096 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/793c4693-2327-492b-9798-18501804cdf3-logs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038116 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038170 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cjbn\" (UniqueName: \"kubernetes.io/projected/793c4693-2327-492b-9798-18501804cdf3-kube-api-access-7cjbn\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-combined-ca-bundle\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038216 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-public-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.038250 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data-custom\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.039573 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/793c4693-2327-492b-9798-18501804cdf3-logs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.053174 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-public-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.058538 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-combined-ca-bundle\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.060552 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cjbn\" (UniqueName: \"kubernetes.io/projected/793c4693-2327-492b-9798-18501804cdf3-kube-api-access-7cjbn\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.061709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data\") pod 
\"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.066156 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-config-data-custom\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.068639 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/793c4693-2327-492b-9798-18501804cdf3-internal-tls-certs\") pod \"barbican-api-5fbb5f795d-phd88\" (UID: \"793c4693-2327-492b-9798-18501804cdf3\") " pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.196536 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.213599 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerStarted","Data":"6f1ddfc5faccbe8680ed49109bf97fb17cef2193d7564f21065e43146dcd96d0"} Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.213865 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.213974 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:16 crc kubenswrapper[4740]: I0216 13:10:16.250337 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58b5794cfd-4trjb" podStartSLOduration=4.25031117 podStartE2EDuration="4.25031117s" podCreationTimestamp="2026-02-16 13:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:16.239570011 +0000 UTC m=+1043.615918732" watchObservedRunningTime="2026-02-16 13:10:16.25031117 +0000 UTC m=+1043.626659891" Feb 16 13:10:19 crc kubenswrapper[4740]: I0216 13:10:19.249402 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerStarted","Data":"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b"} Feb 16 13:10:19 crc kubenswrapper[4740]: I0216 13:10:19.283952 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.283934497 podStartE2EDuration="8.283934497s" podCreationTimestamp="2026-02-16 13:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:19.275216771 +0000 UTC m=+1046.651565492" watchObservedRunningTime="2026-02-16 13:10:19.283934497 +0000 UTC m=+1046.660283218" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.186017 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.186479 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.226123 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.229998 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.280092 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerStarted","Data":"b1f805b3f42130f9ec256249b24cf294052db79347125cb78ef7bd761396a42c"} Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.280265 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:10:22 crc kubenswrapper[4740]: I0216 13:10:22.280899 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.140659 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fbb5f795d-phd88"] Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.328692 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" 
event={"ID":"30b251e5-1979-41ad-ad86-efebb5e6a240","Type":"ContainerStarted","Data":"d1b9cf7963a3448627aa82cb504e23edd58f758f49f5b3149dec36a2e9172b3c"} Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.368921 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" event={"ID":"bc2fd3ee-2093-4adb-a7af-23d05c718429","Type":"ContainerStarted","Data":"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd"} Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.369243 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.382473 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerStarted","Data":"87e47563591dddda2edb2ddfefa2d9a009d1939cf33ba162cc9aca53305f604c"} Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.407168 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f4698b555-qswqc" event={"ID":"c3550143-6df6-42d0-b18a-8b6275eac907","Type":"ContainerStarted","Data":"53e561ac2bf980b00527b2cc4de4e2527253fabffa39158574ca33d75f33b933"} Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.416301 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbb5f795d-phd88" event={"ID":"793c4693-2327-492b-9798-18501804cdf3","Type":"ContainerStarted","Data":"620fecfe559ab2f4b7e5556af99d2ed15a184002045d049dbcc46792310d32af"} Feb 16 13:10:23 crc kubenswrapper[4740]: I0216 13:10:23.429334 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" podStartSLOduration=11.429317663 podStartE2EDuration="11.429317663s" podCreationTimestamp="2026-02-16 13:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 13:10:23.423341976 +0000 UTC m=+1050.799690697" watchObservedRunningTime="2026-02-16 13:10:23.429317663 +0000 UTC m=+1050.805666384" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.429690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerStarted","Data":"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.438725 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hxgr" event={"ID":"6e6806e6-e7ab-40bb-a703-0f4bfe131539","Type":"ContainerStarted","Data":"451d00cb28f2393a2b488c082f43cfde6cbce5c2b86f4a3ccf6583823523e02b"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.447040 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbb5f795d-phd88" event={"ID":"793c4693-2327-492b-9798-18501804cdf3","Type":"ContainerStarted","Data":"22edea436a040f2420273a2cfd8a123be2bfeac22a49bc2827fbdb65ebe6bcb5"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.447088 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbb5f795d-phd88" event={"ID":"793c4693-2327-492b-9798-18501804cdf3","Type":"ContainerStarted","Data":"f0b4ea8abea1e9b7cb7f3c01467bf7d9c9be51328475c7abb750d294d2bee487"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.447913 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.447944 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.450848 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" 
event={"ID":"30b251e5-1979-41ad-ad86-efebb5e6a240","Type":"ContainerStarted","Data":"a159fd47020e98397ec5d8f3691a344afdc0cf0e030443a1f96e58d082c9f6c2"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.455684 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2hxgr" podStartSLOduration=3.616129973 podStartE2EDuration="49.45566975s" podCreationTimestamp="2026-02-16 13:09:35 +0000 UTC" firstStartedPulling="2026-02-16 13:09:36.860492678 +0000 UTC m=+1004.236841399" lastFinishedPulling="2026-02-16 13:10:22.700032455 +0000 UTC m=+1050.076381176" observedRunningTime="2026-02-16 13:10:24.453404549 +0000 UTC m=+1051.829753270" watchObservedRunningTime="2026-02-16 13:10:24.45566975 +0000 UTC m=+1051.832018471" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.473398 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f4698b555-qswqc" event={"ID":"c3550143-6df6-42d0-b18a-8b6275eac907","Type":"ContainerStarted","Data":"65c226b62f8f7960e09cabedf8ac39b2be809b6226aed6e9b63ac95c8c0b01e7"} Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.482959 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-758fd9dd8b-46z5m" podStartSLOduration=3.934197555 podStartE2EDuration="13.48294164s" podCreationTimestamp="2026-02-16 13:10:11 +0000 UTC" firstStartedPulling="2026-02-16 13:10:13.081425897 +0000 UTC m=+1040.457774618" lastFinishedPulling="2026-02-16 13:10:22.630169982 +0000 UTC m=+1050.006518703" observedRunningTime="2026-02-16 13:10:24.476165416 +0000 UTC m=+1051.852514137" watchObservedRunningTime="2026-02-16 13:10:24.48294164 +0000 UTC m=+1051.859290361" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.524254 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.538194 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6f4698b555-qswqc" podStartSLOduration=4.544684308 podStartE2EDuration="13.538174962s" podCreationTimestamp="2026-02-16 13:10:11 +0000 UTC" firstStartedPulling="2026-02-16 13:10:13.706007474 +0000 UTC m=+1041.082356195" lastFinishedPulling="2026-02-16 13:10:22.699498128 +0000 UTC m=+1050.075846849" observedRunningTime="2026-02-16 13:10:24.529133387 +0000 UTC m=+1051.905482108" watchObservedRunningTime="2026-02-16 13:10:24.538174962 +0000 UTC m=+1051.914523683" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.541501 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5fbb5f795d-phd88" podStartSLOduration=9.541487326 podStartE2EDuration="9.541487326s" podCreationTimestamp="2026-02-16 13:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:24.508700913 +0000 UTC m=+1051.885049644" watchObservedRunningTime="2026-02-16 13:10:24.541487326 +0000 UTC m=+1051.917836047" Feb 16 13:10:24 crc kubenswrapper[4740]: I0216 13:10:24.613000 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56b9fd8c4d-crftf" podUID="add1eb0e-dbfc-463a-b676-3e2e2b1f478d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 16 13:10:25 crc kubenswrapper[4740]: I0216 13:10:25.496251 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerStarted","Data":"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423"} Feb 16 13:10:25 crc kubenswrapper[4740]: I0216 13:10:25.542094 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.542077151 podStartE2EDuration="11.542077151s" podCreationTimestamp="2026-02-16 13:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:25.532139087 +0000 UTC m=+1052.908487808" watchObservedRunningTime="2026-02-16 13:10:25.542077151 +0000 UTC m=+1052.918425872" Feb 16 13:10:25 crc kubenswrapper[4740]: I0216 13:10:25.872224 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:26 crc kubenswrapper[4740]: I0216 13:10:26.099479 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:26 crc kubenswrapper[4740]: I0216 13:10:26.137500 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:10:27 crc kubenswrapper[4740]: I0216 13:10:27.371901 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:10:29 crc kubenswrapper[4740]: I0216 13:10:29.547494 4740 generic.go:334] "Generic (PLEG): container finished" podID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" containerID="451d00cb28f2393a2b488c082f43cfde6cbce5c2b86f4a3ccf6583823523e02b" exitCode=0 Feb 16 13:10:29 crc kubenswrapper[4740]: I0216 13:10:29.547580 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hxgr" 
event={"ID":"6e6806e6-e7ab-40bb-a703-0f4bfe131539","Type":"ContainerDied","Data":"451d00cb28f2393a2b488c082f43cfde6cbce5c2b86f4a3ccf6583823523e02b"} Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.927502 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989312 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989367 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989471 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989529 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4fvz\" (UniqueName: \"kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989580 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.989599 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data\") pod \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\" (UID: \"6e6806e6-e7ab-40bb-a703-0f4bfe131539\") " Feb 16 13:10:31 crc kubenswrapper[4740]: I0216 13:10:31.990719 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.010502 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz" (OuterVolumeSpecName: "kube-api-access-c4fvz") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "kube-api-access-c4fvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.010570 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts" (OuterVolumeSpecName: "scripts") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.012962 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.021733 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.043972 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data" (OuterVolumeSpecName: "config-data") pod "6e6806e6-e7ab-40bb-a703-0f4bfe131539" (UID: "6e6806e6-e7ab-40bb-a703-0f4bfe131539"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091126 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091169 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091183 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091192 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4fvz\" (UniqueName: \"kubernetes.io/projected/6e6806e6-e7ab-40bb-a703-0f4bfe131539-kube-api-access-c4fvz\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091201 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e6806e6-e7ab-40bb-a703-0f4bfe131539-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.091209 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6806e6-e7ab-40bb-a703-0f4bfe131539-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.196269 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.437455 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] 
Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.437754 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c64d8f89f-pfmqj" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-api" containerID="cri-o://c410381d7374eff2277f42e0cd7ca5c44964d1eb3335c6251f192d7b0d2e3b6a" gracePeriod=30 Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.438319 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c64d8f89f-pfmqj" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-httpd" containerID="cri-o://b8d8df7a4ac3106e08eff1e473c10fdc358e20ceadcee7139973a904e19f8b91" gracePeriod=30 Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.461607 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-c64d8f89f-pfmqj" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9696/\": EOF" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.478622 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7d8c67b945-9qhdf"] Feb 16 13:10:32 crc kubenswrapper[4740]: E0216 13:10:32.483022 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" containerName="cinder-db-sync" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.483059 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" containerName="cinder-db-sync" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.483295 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" containerName="cinder-db-sync" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.484569 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.489277 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d8c67b945-9qhdf"] Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.580901 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2hxgr" event={"ID":"6e6806e6-e7ab-40bb-a703-0f4bfe131539","Type":"ContainerDied","Data":"09ac2a81e51e0f54158edbd6cff4ecaee99212883c7807008289667a194afba6"} Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.580954 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09ac2a81e51e0f54158edbd6cff4ecaee99212883c7807008289667a194afba6" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.581545 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2hxgr" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.605772 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-internal-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.605957 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-ovndb-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.606086 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-combined-ca-bundle\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.606152 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6sp8\" (UniqueName: \"kubernetes.io/projected/2d2e1871-02f7-4ff9-9987-054bf39f4418-kube-api-access-h6sp8\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.606301 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-config\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.606576 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-httpd-config\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.607123 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-public-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.710847 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-public-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.710952 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-internal-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.711002 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-ovndb-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.711054 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-combined-ca-bundle\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.711080 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6sp8\" (UniqueName: \"kubernetes.io/projected/2d2e1871-02f7-4ff9-9987-054bf39f4418-kube-api-access-h6sp8\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.711126 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-config\") pod 
\"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.711229 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-httpd-config\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.716190 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-ovndb-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.716902 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-combined-ca-bundle\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.717091 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-httpd-config\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.717307 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-internal-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 
13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.717315 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-config\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.721360 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2e1871-02f7-4ff9-9987-054bf39f4418-public-tls-certs\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.732961 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6sp8\" (UniqueName: \"kubernetes.io/projected/2d2e1871-02f7-4ff9-9987-054bf39f4418-kube-api-access-h6sp8\") pod \"neutron-7d8c67b945-9qhdf\" (UID: \"2d2e1871-02f7-4ff9-9987-054bf39f4418\") " pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.802545 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.828042 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.847168 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.908283 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.908544 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="dnsmasq-dns" containerID="cri-o://0bd5e46d963bd8f9a4c9b7c3012c62f1a37d6c130636480a19dc95c0f26d9e15" gracePeriod=10 Feb 16 13:10:32 crc kubenswrapper[4740]: I0216 13:10:32.974727 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fbb5f795d-phd88" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.044517 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"] Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.045322 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58b5794cfd-4trjb" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api" containerID="cri-o://6f1ddfc5faccbe8680ed49109bf97fb17cef2193d7564f21065e43146dcd96d0" gracePeriod=30 Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.044984 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58b5794cfd-4trjb" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api-log" 
containerID="cri-o://fcdcb5b22e8cc76adbbe1f757da3d2bf7c1e93bb52cf022c637fac80081d1283" gracePeriod=30 Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.250889 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.252835 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.254487 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.262048 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.262096 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mtx8t" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.262559 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.262726 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323323 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323362 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" 
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323385 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf65h\" (UniqueName: \"kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323423 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323448 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.323489 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.413756 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"] Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.418154 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"] Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.418285 4740 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.426910 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.426969 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.427005 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf65h\" (UniqueName: \"kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.427063 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.427104 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.427163 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.430227 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.436129 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.437884 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.451409 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.463485 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.465102 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf65h\" (UniqueName: \"kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h\") pod \"cinder-scheduler-0\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.491861 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.493311 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.497655 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.501866 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529044 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529175 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529252 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529344 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cllbp\" (UniqueName: \"kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529413 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529522 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529594 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0" Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529691 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529765 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529852 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529932 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.529995 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.530059 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqbvx\" (UniqueName: \"kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.545418 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.620988 4740 generic.go:334] "Generic (PLEG): container finished" podID="40882b0a-c73f-4936-83f9-8bef1774c356" containerID="fcdcb5b22e8cc76adbbe1f757da3d2bf7c1e93bb52cf022c637fac80081d1283" exitCode=143
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.621174 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerDied","Data":"fcdcb5b22e8cc76adbbe1f757da3d2bf7c1e93bb52cf022c637fac80081d1283"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.625925 4740 generic.go:334] "Generic (PLEG): container finished" podID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerID="0bd5e46d963bd8f9a4c9b7c3012c62f1a37d6c130636480a19dc95c0f26d9e15" exitCode=0
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.626005 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" event={"ID":"3cd0546c-4e67-40e3-93c1-1aee20e6df48","Type":"ContainerDied","Data":"0bd5e46d963bd8f9a4c9b7c3012c62f1a37d6c130636480a19dc95c0f26d9e15"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631378 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerID="6c46e84c5bdb3103f28f1cca092b9bf9c54cd61d61aebe4345cf80eca5a71fc3" exitCode=137
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631404 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerID="99f292203fe1ed3c399eda55f9bf0dc36d25dfeda0396d3a1124005fb27b7059" exitCode=137
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerDied","Data":"6c46e84c5bdb3103f28f1cca092b9bf9c54cd61d61aebe4345cf80eca5a71fc3"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631472 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631556 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631574 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631589 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631613 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631630 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631648 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqbvx\" (UniqueName: \"kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631731 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631757 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631779 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631802 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cllbp\" (UniqueName: \"kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631836 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.632267 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.631480 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerDied","Data":"99f292203fe1ed3c399eda55f9bf0dc36d25dfeda0396d3a1124005fb27b7059"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.633278 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.633677 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.635238 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.635675 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.635939 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.636068 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.642680 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.642850 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerDied","Data":"b8d8df7a4ac3106e08eff1e473c10fdc358e20ceadcee7139973a904e19f8b91"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.643025 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.643289 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.643329 4740 generic.go:334] "Generic (PLEG): container finished" podID="fd15191d-cc73-4274-b185-d3572e5deac0" containerID="b8d8df7a4ac3106e08eff1e473c10fdc358e20ceadcee7139973a904e19f8b91" exitCode=0
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.644630 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.648963 4740 generic.go:334] "Generic (PLEG): container finished" podID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerID="126ea531f9d0f4e123ef2ed666501765655bc53f3a4c471202c3b01c12320210" exitCode=137
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.648987 4740 generic.go:334] "Generic (PLEG): container finished" podID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerID="217a126c559956ab646410696f08329f76a43596d698885dcc0d56ddc65d5b42" exitCode=137
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.649175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerDied","Data":"126ea531f9d0f4e123ef2ed666501765655bc53f3a4c471202c3b01c12320210"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.649198 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerDied","Data":"217a126c559956ab646410696f08329f76a43596d698885dcc0d56ddc65d5b42"}
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.653410 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cllbp\" (UniqueName: \"kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp\") pod \"dnsmasq-dns-6bb4fc677f-hjtmw\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.653919 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqbvx\" (UniqueName: \"kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx\") pod \"cinder-api-0\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") " pod="openstack/cinder-api-0"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.863791 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw"
Feb 16 13:10:33 crc kubenswrapper[4740]: I0216 13:10:33.881223 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.671280 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.671793 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.691650 4740 generic.go:334] "Generic (PLEG): container finished" podID="fd15191d-cc73-4274-b185-d3572e5deac0" containerID="c410381d7374eff2277f42e0cd7ca5c44964d1eb3335c6251f192d7b0d2e3b6a" exitCode=0
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.691690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerDied","Data":"c410381d7374eff2277f42e0cd7ca5c44964d1eb3335c6251f192d7b0d2e3b6a"}
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.749414 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.784130 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.894157 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58778bbcc-2dwkc"
Feb 16 13:10:34 crc kubenswrapper[4740]: E0216 13:10:34.952470 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="e23974e9-800c-4295-8f84-89b4052280cd"
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.976431 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs\") pod \"29b02cf4-52ac-4f68-a0de-83f62949ce16\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") "
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.976731 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key\") pod \"29b02cf4-52ac-4f68-a0de-83f62949ce16\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") "
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.976855 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cccxj\" (UniqueName: \"kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj\") pod \"29b02cf4-52ac-4f68-a0de-83f62949ce16\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") "
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.977045 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts\") pod \"29b02cf4-52ac-4f68-a0de-83f62949ce16\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") "
Feb 16 13:10:34 crc kubenswrapper[4740]: I0216 13:10:34.977200 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data\") pod \"29b02cf4-52ac-4f68-a0de-83f62949ce16\" (UID: \"29b02cf4-52ac-4f68-a0de-83f62949ce16\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.002858 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs" (OuterVolumeSpecName: "logs") pod "29b02cf4-52ac-4f68-a0de-83f62949ce16" (UID: "29b02cf4-52ac-4f68-a0de-83f62949ce16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.009206 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "29b02cf4-52ac-4f68-a0de-83f62949ce16" (UID: "29b02cf4-52ac-4f68-a0de-83f62949ce16"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.018489 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj" (OuterVolumeSpecName: "kube-api-access-cccxj") pod "29b02cf4-52ac-4f68-a0de-83f62949ce16" (UID: "29b02cf4-52ac-4f68-a0de-83f62949ce16"). InnerVolumeSpecName "kube-api-access-cccxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.018721 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data" (OuterVolumeSpecName: "config-data") pod "29b02cf4-52ac-4f68-a0de-83f62949ce16" (UID: "29b02cf4-52ac-4f68-a0de-83f62949ce16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.073122 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts" (OuterVolumeSpecName: "scripts") pod "29b02cf4-52ac-4f68-a0de-83f62949ce16" (UID: "29b02cf4-52ac-4f68-a0de-83f62949ce16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.075744 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v"
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.079775 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-dfc4b7997-nx6ww"
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.081553 4740 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29b02cf4-52ac-4f68-a0de-83f62949ce16-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.081582 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cccxj\" (UniqueName: \"kubernetes.io/projected/29b02cf4-52ac-4f68-a0de-83f62949ce16-kube-api-access-cccxj\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.081594 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.081602 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29b02cf4-52ac-4f68-a0de-83f62949ce16-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.081610 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b02cf4-52ac-4f68-a0de-83f62949ce16-logs\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182523 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key\") pod \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182634 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182654 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data\") pod \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182693 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs\") pod \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182761 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182776 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182793 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcwhc\" (UniqueName: \"kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc\") pod \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182875 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182927 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k97f\" (UniqueName: \"kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182977 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts\") pod \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\" (UID: \"fbc73a16-685a-4912-bec0-407ef2c7d3e9\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.182995 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb\") pod \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\" (UID: \"3cd0546c-4e67-40e3-93c1-1aee20e6df48\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.186748 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs" (OuterVolumeSpecName: "logs") pod "fbc73a16-685a-4912-bec0-407ef2c7d3e9" (UID: "fbc73a16-685a-4912-bec0-407ef2c7d3e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.193967 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc" (OuterVolumeSpecName: "kube-api-access-qcwhc") pod "fbc73a16-685a-4912-bec0-407ef2c7d3e9" (UID: "fbc73a16-685a-4912-bec0-407ef2c7d3e9"). InnerVolumeSpecName "kube-api-access-qcwhc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.194036 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fbc73a16-685a-4912-bec0-407ef2c7d3e9" (UID: "fbc73a16-685a-4912-bec0-407ef2c7d3e9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.202079 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"]
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.202620 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f" (OuterVolumeSpecName: "kube-api-access-5k97f") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "kube-api-access-5k97f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.239896 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data" (OuterVolumeSpecName: "config-data") pod "fbc73a16-685a-4912-bec0-407ef2c7d3e9" (UID: "fbc73a16-685a-4912-bec0-407ef2c7d3e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.258534 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.278353 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config" (OuterVolumeSpecName: "config") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.283671 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts" (OuterVolumeSpecName: "scripts") pod "fbc73a16-685a-4912-bec0-407ef2c7d3e9" (UID: "fbc73a16-685a-4912-bec0-407ef2c7d3e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285346 4740 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fbc73a16-685a-4912-bec0-407ef2c7d3e9-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285382 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285392 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285400 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc73a16-685a-4912-bec0-407ef2c7d3e9-logs\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285416 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcwhc\" (UniqueName: \"kubernetes.io/projected/fbc73a16-685a-4912-bec0-407ef2c7d3e9-kube-api-access-qcwhc\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285426 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285435 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k97f\" (UniqueName: \"kubernetes.io/projected/3cd0546c-4e67-40e3-93c1-1aee20e6df48-kube-api-access-5k97f\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.285442 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc73a16-685a-4912-bec0-407ef2c7d3e9-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.303170 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.315785 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.324177 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3cd0546c-4e67-40e3-93c1-1aee20e6df48" (UID: "3cd0546c-4e67-40e3-93c1-1aee20e6df48"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.387051 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.387080 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.387091 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cd0546c-4e67-40e3-93c1-1aee20e6df48-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.396974 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.433599 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c64d8f89f-pfmqj"
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.488271 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.488336 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.489525 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.489629 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.489659 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.489714 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q69zx\" (UniqueName: \"kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.489738 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle\") pod \"fd15191d-cc73-4274-b185-d3572e5deac0\" (UID: \"fd15191d-cc73-4274-b185-d3572e5deac0\") "
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.496556 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.507203 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx" (OuterVolumeSpecName: "kube-api-access-q69zx") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "kube-api-access-q69zx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.550247 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config" (OuterVolumeSpecName: "config") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.558499 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.567290 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.600169 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.600205 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.600217 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q69zx\" (UniqueName: \"kubernetes.io/projected/fd15191d-cc73-4274-b185-d3572e5deac0-kube-api-access-q69zx\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.607661 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.642010 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d8c67b945-9qhdf"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.648517 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.650554 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.668627 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fd15191d-cc73-4274-b185-d3572e5deac0" (UID: "fd15191d-cc73-4274-b185-d3572e5deac0"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.701901 4740 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.701935 4740 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.701946 4740 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.701957 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd15191d-cc73-4274-b185-d3572e5deac0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.703340 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dfc4b7997-nx6ww" event={"ID":"fbc73a16-685a-4912-bec0-407ef2c7d3e9","Type":"ContainerDied","Data":"ebb9cfe67450ca7ef04f47283c5bb6f822165573117c5716dd75a543c9096769"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.703386 4740 scope.go:117] "RemoveContainer" containerID="6c46e84c5bdb3103f28f1cca092b9bf9c54cd61d61aebe4345cf80eca5a71fc3" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.703489 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-dfc4b7997-nx6ww" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.721497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerStarted","Data":"d6c337d79d1d7c6f98bfa582dc6465924b871bb6c4172b522fdd68735c26fd31"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.721785 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.721790 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="ceilometer-notification-agent" containerID="cri-o://1688b3ca22762e9c4762a55031e5640c60db4ffb9816f6edaf523dcfadd2c0a7" gracePeriod=30 Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.721954 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="proxy-httpd" containerID="cri-o://d6c337d79d1d7c6f98bfa582dc6465924b871bb6c4172b522fdd68735c26fd31" gracePeriod=30 Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.721960 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="sg-core" containerID="cri-o://87e47563591dddda2edb2ddfefa2d9a009d1939cf33ba162cc9aca53305f604c" gracePeriod=30 Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.724859 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerStarted","Data":"a30f8e0a38b39c3a533a61933daccfe7c9ac4a55dac7d23ebbd3bc31afe612c4"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.728003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerStarted","Data":"aac1339f667ba287f535c7aecd906d0a53a56e248b1a5a8045462961130dcc2b"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.737139 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.737159 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-8v87v" event={"ID":"3cd0546c-4e67-40e3-93c1-1aee20e6df48","Type":"ContainerDied","Data":"c97c29b3334866f32db650fc18c5be9637e2474fd2a6df30fb7505a093dc5ffb"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.738870 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d8c67b945-9qhdf" event={"ID":"2d2e1871-02f7-4ff9-9987-054bf39f4418","Type":"ContainerStarted","Data":"743e32c7253a8cc36fbf948e38784e79b55632de84992eaace05d9cb1e718286"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.748109 4740 generic.go:334] "Generic (PLEG): container finished" podID="fecd834c-f149-401b-9c43-810e215a68ed" containerID="7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa" exitCode=0 Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.748261 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" event={"ID":"fecd834c-f149-401b-9c43-810e215a68ed","Type":"ContainerDied","Data":"7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.748295 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" event={"ID":"fecd834c-f149-401b-9c43-810e215a68ed","Type":"ContainerStarted","Data":"07d715fc080ad12a61c95f40dabacc8440c0cb90c7a33ab67e7813105918c946"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.753960 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-c64d8f89f-pfmqj" event={"ID":"fd15191d-cc73-4274-b185-d3572e5deac0","Type":"ContainerDied","Data":"09e37df47c206f35c05b1284ea0f7acd624391787074b2cc755f95b355d26971"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.754199 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c64d8f89f-pfmqj" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.761496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58778bbcc-2dwkc" event={"ID":"29b02cf4-52ac-4f68-a0de-83f62949ce16","Type":"ContainerDied","Data":"ef863733ff531229caffac8488a7e74d8977b0c756b3b6496a0497807187e742"} Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.761576 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58778bbcc-2dwkc" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.762698 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.762722 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.810707 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.854895 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-dfc4b7997-nx6ww"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.880579 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.888097 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-58778bbcc-2dwkc"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.921507 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.930257 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-8v87v"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.937898 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.946298 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c64d8f89f-pfmqj"] Feb 16 13:10:35 crc kubenswrapper[4740]: I0216 13:10:35.980541 4740 scope.go:117] "RemoveContainer" containerID="99f292203fe1ed3c399eda55f9bf0dc36d25dfeda0396d3a1124005fb27b7059" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.012169 4740 scope.go:117] "RemoveContainer" containerID="0bd5e46d963bd8f9a4c9b7c3012c62f1a37d6c130636480a19dc95c0f26d9e15" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.045799 4740 scope.go:117] "RemoveContainer" containerID="b1122b7363efb2c3f51a15a5779c7778b950ebde41c8ea4640584635fdf06e8f" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.108505 4740 scope.go:117] "RemoveContainer" containerID="b8d8df7a4ac3106e08eff1e473c10fdc358e20ceadcee7139973a904e19f8b91" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.393994 4740 scope.go:117] "RemoveContainer" containerID="c410381d7374eff2277f42e0cd7ca5c44964d1eb3335c6251f192d7b0d2e3b6a" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.649825 4740 scope.go:117] "RemoveContainer" containerID="126ea531f9d0f4e123ef2ed666501765655bc53f3a4c471202c3b01c12320210" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.775170 4740 generic.go:334] "Generic (PLEG): container finished" podID="40882b0a-c73f-4936-83f9-8bef1774c356" containerID="6f1ddfc5faccbe8680ed49109bf97fb17cef2193d7564f21065e43146dcd96d0" exitCode=0 Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.775238 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerDied","Data":"6f1ddfc5faccbe8680ed49109bf97fb17cef2193d7564f21065e43146dcd96d0"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.784296 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d8c67b945-9qhdf" event={"ID":"2d2e1871-02f7-4ff9-9987-054bf39f4418","Type":"ContainerStarted","Data":"fc964d0b49183b12cb5186f46589a16fcb1022c5b54450dc53f1b5f39c5c49ee"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.784342 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d8c67b945-9qhdf" event={"ID":"2d2e1871-02f7-4ff9-9987-054bf39f4418","Type":"ContainerStarted","Data":"4c51adb2e080a37af373d8805aa1044901dd6491b487202c4d5d418cae7c73d7"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.784634 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.827694 4740 generic.go:334] "Generic (PLEG): container finished" podID="e23974e9-800c-4295-8f84-89b4052280cd" containerID="d6c337d79d1d7c6f98bfa582dc6465924b871bb6c4172b522fdd68735c26fd31" exitCode=0 Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.827727 4740 generic.go:334] "Generic (PLEG): container finished" podID="e23974e9-800c-4295-8f84-89b4052280cd" containerID="87e47563591dddda2edb2ddfefa2d9a009d1939cf33ba162cc9aca53305f604c" exitCode=2 Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.827732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerDied","Data":"d6c337d79d1d7c6f98bfa582dc6465924b871bb6c4172b522fdd68735c26fd31"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.827847 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerDied","Data":"87e47563591dddda2edb2ddfefa2d9a009d1939cf33ba162cc9aca53305f604c"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.858242 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerStarted","Data":"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"} Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.881538 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7d8c67b945-9qhdf" podStartSLOduration=4.881506406 podStartE2EDuration="4.881506406s" podCreationTimestamp="2026-02-16 13:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:36.88064677 +0000 UTC m=+1064.256995491" watchObservedRunningTime="2026-02-16 13:10:36.881506406 +0000 UTC m=+1064.257855127" Feb 16 13:10:36 crc kubenswrapper[4740]: I0216 13:10:36.945986 4740 scope.go:117] "RemoveContainer" containerID="217a126c559956ab646410696f08329f76a43596d698885dcc0d56ddc65d5b42" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.088010 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.254004 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle\") pod \"40882b0a-c73f-4936-83f9-8bef1774c356\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.254345 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bdlr\" (UniqueName: \"kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr\") pod \"40882b0a-c73f-4936-83f9-8bef1774c356\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.254430 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data\") pod \"40882b0a-c73f-4936-83f9-8bef1774c356\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.254478 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs\") pod \"40882b0a-c73f-4936-83f9-8bef1774c356\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.254524 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom\") pod \"40882b0a-c73f-4936-83f9-8bef1774c356\" (UID: \"40882b0a-c73f-4936-83f9-8bef1774c356\") " Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.255201 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs" (OuterVolumeSpecName: "logs") pod "40882b0a-c73f-4936-83f9-8bef1774c356" (UID: "40882b0a-c73f-4936-83f9-8bef1774c356"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.261539 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "40882b0a-c73f-4936-83f9-8bef1774c356" (UID: "40882b0a-c73f-4936-83f9-8bef1774c356"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.261600 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr" (OuterVolumeSpecName: "kube-api-access-2bdlr") pod "40882b0a-c73f-4936-83f9-8bef1774c356" (UID: "40882b0a-c73f-4936-83f9-8bef1774c356"). InnerVolumeSpecName "kube-api-access-2bdlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.287854 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40882b0a-c73f-4936-83f9-8bef1774c356" (UID: "40882b0a-c73f-4936-83f9-8bef1774c356"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.299022 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" path="/var/lib/kubelet/pods/29b02cf4-52ac-4f68-a0de-83f62949ce16/volumes" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.299946 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" path="/var/lib/kubelet/pods/3cd0546c-4e67-40e3-93c1-1aee20e6df48/volumes" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.300887 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" path="/var/lib/kubelet/pods/fbc73a16-685a-4912-bec0-407ef2c7d3e9/volumes" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.302215 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" path="/var/lib/kubelet/pods/fd15191d-cc73-4274-b185-d3572e5deac0/volumes" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.323318 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data" (OuterVolumeSpecName: "config-data") pod "40882b0a-c73f-4936-83f9-8bef1774c356" (UID: "40882b0a-c73f-4936-83f9-8bef1774c356"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.357139 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.357171 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40882b0a-c73f-4936-83f9-8bef1774c356-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.357181 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.357192 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40882b0a-c73f-4936-83f9-8bef1774c356-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.357201 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bdlr\" (UniqueName: \"kubernetes.io/projected/40882b0a-c73f-4936-83f9-8bef1774c356-kube-api-access-2bdlr\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.400894 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.400942 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.867918 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api-log" 
containerID="cri-o://3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94" gracePeriod=30 Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.868781 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerStarted","Data":"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"} Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.869713 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.868893 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api" containerID="cri-o://aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31" gracePeriod=30 Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.876958 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58b5794cfd-4trjb" event={"ID":"40882b0a-c73f-4936-83f9-8bef1774c356","Type":"ContainerDied","Data":"302a4fe2d789df7c6696f0d0599ddcf1f3c215a4ff9751a1692adaad334ff853"} Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.877011 4740 scope.go:117] "RemoveContainer" containerID="6f1ddfc5faccbe8680ed49109bf97fb17cef2193d7564f21065e43146dcd96d0" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.877132 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58b5794cfd-4trjb" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.887345 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" event={"ID":"fecd834c-f149-401b-9c43-810e215a68ed","Type":"ContainerStarted","Data":"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3"} Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.888547 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.890058 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.890046321 podStartE2EDuration="4.890046321s" podCreationTimestamp="2026-02-16 13:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:37.883247546 +0000 UTC m=+1065.259596267" watchObservedRunningTime="2026-02-16 13:10:37.890046321 +0000 UTC m=+1065.266395042" Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.906007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerStarted","Data":"7fbcc8a8bd921bc8214ceccbe3647e93cff6d90abe01c523347b0aea8fa73dd5"} Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.923521 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" podStartSLOduration=4.923503926 podStartE2EDuration="4.923503926s" podCreationTimestamp="2026-02-16 13:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:37.916865706 +0000 UTC m=+1065.293214427" watchObservedRunningTime="2026-02-16 13:10:37.923503926 +0000 UTC m=+1065.299852647" Feb 16 
13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.932621 4740 scope.go:117] "RemoveContainer" containerID="fcdcb5b22e8cc76adbbe1f757da3d2bf7c1e93bb52cf022c637fac80081d1283"
Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.958906 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"]
Feb 16 13:10:37 crc kubenswrapper[4740]: I0216 13:10:37.976924 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-58b5794cfd-4trjb"]
Feb 16 13:10:38 crc kubenswrapper[4740]: E0216 13:10:38.319456 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37375845_ac13_48bc_a134_c8fdc01e4242.slice/crio-aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37375845_ac13_48bc_a134_c8fdc01e4242.slice/crio-conmon-aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.377695 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.377789 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.382025 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.655257 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792497 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") "
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792560 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") "
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792654 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") "
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792769 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqbvx\" (UniqueName: \"kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") "
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792821 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") "
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792869 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") "
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.792892 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id\") pod \"37375845-ac13-48bc-a134-c8fdc01e4242\" (UID: \"37375845-ac13-48bc-a134-c8fdc01e4242\") "
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.793105 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs" (OuterVolumeSpecName: "logs") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.793185 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.793959 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37375845-ac13-48bc-a134-c8fdc01e4242-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.793985 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37375845-ac13-48bc-a134-c8fdc01e4242-logs\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.799972 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts" (OuterVolumeSpecName: "scripts") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.800062 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx" (OuterVolumeSpecName: "kube-api-access-vqbvx") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "kube-api-access-vqbvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.808064 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.831967 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.899187 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqbvx\" (UniqueName: \"kubernetes.io/projected/37375845-ac13-48bc-a134-c8fdc01e4242-kube-api-access-vqbvx\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.899238 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.899249 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.899257 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.911200 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data" (OuterVolumeSpecName: "config-data") pod "37375845-ac13-48bc-a134-c8fdc01e4242" (UID: "37375845-ac13-48bc-a134-c8fdc01e4242"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.943399 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.943464 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerDied","Data":"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"}
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.943538 4740 scope.go:117] "RemoveContainer" containerID="aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.948427 4740 generic.go:334] "Generic (PLEG): container finished" podID="37375845-ac13-48bc-a134-c8fdc01e4242" containerID="aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31" exitCode=0
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.948469 4740 generic.go:334] "Generic (PLEG): container finished" podID="37375845-ac13-48bc-a134-c8fdc01e4242" containerID="3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94" exitCode=143
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.948557 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerDied","Data":"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"}
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.948586 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37375845-ac13-48bc-a134-c8fdc01e4242","Type":"ContainerDied","Data":"aac1339f667ba287f535c7aecd906d0a53a56e248b1a5a8045462961130dcc2b"}
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.976941 4740 generic.go:334] "Generic (PLEG): container finished" podID="e23974e9-800c-4295-8f84-89b4052280cd" containerID="1688b3ca22762e9c4762a55031e5640c60db4ffb9816f6edaf523dcfadd2c0a7" exitCode=0
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.977006 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerDied","Data":"1688b3ca22762e9c4762a55031e5640c60db4ffb9816f6edaf523dcfadd2c0a7"}
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.977059 4740 scope.go:117] "RemoveContainer" containerID="3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"
Feb 16 13:10:38 crc kubenswrapper[4740]: I0216 13:10:38.989902 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerStarted","Data":"06d1d7b93be02f29dc72db7cbecd2c0acd7f9cd6914e145816221b024786f6bd"}
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.018507 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.035575 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37375845-ac13-48bc-a134-c8fdc01e4242-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.042920 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.049591 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050029 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050045 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050062 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050071 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050086 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050094 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050107 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050114 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050134 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050143 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050162 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050169 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050182 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050191 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050204 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-httpd"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050212 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-httpd"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050227 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="dnsmasq-dns"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050235 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="dnsmasq-dns"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050255 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-api"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050263 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-api"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050273 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="init"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050279 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="init"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.050290 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050298 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050538 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-api"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050558 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050574 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc73a16-685a-4912-bec0-407ef2c7d3e9" containerName="horizon"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050586 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050598 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050612 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050625 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b02cf4-52ac-4f68-a0de-83f62949ce16" containerName="horizon-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050639 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd15191d-cc73-4274-b185-d3572e5deac0" containerName="neutron-httpd"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050657 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cd0546c-4e67-40e3-93c1-1aee20e6df48" containerName="dnsmasq-dns"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050666 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" containerName="cinder-api-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.050675 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" containerName="barbican-api-log"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.051784 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.052285 4740 scope.go:117] "RemoveContainer" containerID="aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.054961 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31\": container with ID starting with aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31 not found: ID does not exist" containerID="aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.055096 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"} err="failed to get container status \"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31\": rpc error: code = NotFound desc = could not find container \"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31\": container with ID starting with aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31 not found: ID does not exist"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.055171 4740 scope.go:117] "RemoveContainer" containerID="3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"
Feb 16 13:10:39 crc kubenswrapper[4740]: E0216 13:10:39.055695 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94\": container with ID starting with 3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94 not found: ID does not exist" containerID="3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.055741 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"} err="failed to get container status \"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94\": rpc error: code = NotFound desc = could not find container \"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94\": container with ID starting with 3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94 not found: ID does not exist"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.055779 4740 scope.go:117] "RemoveContainer" containerID="aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.056450 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31"} err="failed to get container status \"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31\": rpc error: code = NotFound desc = could not find container \"aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31\": container with ID starting with aa0bb4298fb5978f6950ae0d16bcb6953a8ac96e2510dff474f43bf114376e31 not found: ID does not exist"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.056489 4740 scope.go:117] "RemoveContainer" containerID="3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.056708 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94"} err="failed to get container status \"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94\": rpc error: code = NotFound desc = could not find container \"3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94\": container with ID starting with 3951af7833d2f6cdca100d02801d423a51176867348904bb948023eb2b2bfe94 not found: ID does not exist"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.060266 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.060522 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.060702 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.062343 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.188106419 podStartE2EDuration="6.062321989s" podCreationTimestamp="2026-02-16 13:10:33 +0000 UTC" firstStartedPulling="2026-02-16 13:10:35.570844223 +0000 UTC m=+1062.947192944" lastFinishedPulling="2026-02-16 13:10:36.445059793 +0000 UTC m=+1063.821408514" observedRunningTime="2026-02-16 13:10:39.030696522 +0000 UTC m=+1066.407045253" watchObservedRunningTime="2026-02-16 13:10:39.062321989 +0000 UTC m=+1066.438670710"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.096586 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.115090 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.136637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.136885 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data-custom\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.136994 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-scripts\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137189 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137283 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137346 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bx4f\" (UniqueName: \"kubernetes.io/projected/fcc53865-a327-4f02-a908-f0b97ae1e2c2-kube-api-access-7bx4f\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137445 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcc53865-a327-4f02-a908-f0b97ae1e2c2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137526 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.137647 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc53865-a327-4f02-a908-f0b97ae1e2c2-logs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.239761 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") "
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.239850 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") "
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.239894 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") "
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.239939 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") "
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.239972 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ks2m\" (UniqueName: \"kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") "
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.240065 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") "
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.240096 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd\") pod \"e23974e9-800c-4295-8f84-89b4052280cd\" (UID: \"e23974e9-800c-4295-8f84-89b4052280cd\") "
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.240225 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.240848 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc53865-a327-4f02-a908-f0b97ae1e2c2-logs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.240952 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241006 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data-custom\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241043 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-scripts\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241212 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241284 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241340 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bx4f\" (UniqueName: \"kubernetes.io/projected/fcc53865-a327-4f02-a908-f0b97ae1e2c2-kube-api-access-7bx4f\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241386 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcc53865-a327-4f02-a908-f0b97ae1e2c2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241455 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241548 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.241648 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.247117 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcc53865-a327-4f02-a908-f0b97ae1e2c2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.247445 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc53865-a327-4f02-a908-f0b97ae1e2c2-logs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.248367 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.248484 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m" (OuterVolumeSpecName: "kube-api-access-4ks2m") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "kube-api-access-4ks2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.248632 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts" (OuterVolumeSpecName: "scripts") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.249093 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-scripts\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.251688 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data-custom\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.260052 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.261261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-config-data\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.263179 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc53865-a327-4f02-a908-f0b97ae1e2c2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.263566 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bx4f\" (UniqueName: \"kubernetes.io/projected/fcc53865-a327-4f02-a908-f0b97ae1e2c2-kube-api-access-7bx4f\") pod \"cinder-api-0\" (UID: \"fcc53865-a327-4f02-a908-f0b97ae1e2c2\") " pod="openstack/cinder-api-0"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.293789 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.296067 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37375845-ac13-48bc-a134-c8fdc01e4242" path="/var/lib/kubelet/pods/37375845-ac13-48bc-a134-c8fdc01e4242/volumes"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.297223 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40882b0a-c73f-4936-83f9-8bef1774c356" path="/var/lib/kubelet/pods/40882b0a-c73f-4936-83f9-8bef1774c356/volumes"
Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.316252 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.342996 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data" (OuterVolumeSpecName: "config-data") pod "e23974e9-800c-4295-8f84-89b4052280cd" (UID: "e23974e9-800c-4295-8f84-89b4052280cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.343805 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.343999 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.344119 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.344235 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ks2m\" (UniqueName: \"kubernetes.io/projected/e23974e9-800c-4295-8f84-89b4052280cd-kube-api-access-4ks2m\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.344368 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e23974e9-800c-4295-8f84-89b4052280cd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.344533 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e23974e9-800c-4295-8f84-89b4052280cd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.408702 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.480370 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.538599 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-56b9fd8c4d-crftf" Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.618488 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:10:39 crc kubenswrapper[4740]: I0216 13:10:39.934540 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.001056 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fcc53865-a327-4f02-a908-f0b97ae1e2c2","Type":"ContainerStarted","Data":"1bbd648bd07fb0718d2c22db45e34861633f97eab0f6c26f66c44193d285e767"} Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.004986 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.006047 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e23974e9-800c-4295-8f84-89b4052280cd","Type":"ContainerDied","Data":"b5b6803f18aed28cfffe176c666815d2af3e0f9085dbb6960a1b2061739c2efb"} Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.006080 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon-log" containerID="cri-o://450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0" gracePeriod=30 Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.006096 4740 scope.go:117] "RemoveContainer" containerID="d6c337d79d1d7c6f98bfa582dc6465924b871bb6c4172b522fdd68735c26fd31" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.006174 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" containerID="cri-o://e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8" gracePeriod=30 Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.046091 4740 scope.go:117] "RemoveContainer" containerID="87e47563591dddda2edb2ddfefa2d9a009d1939cf33ba162cc9aca53305f604c" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.104089 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.123449 4740 scope.go:117] "RemoveContainer" containerID="1688b3ca22762e9c4762a55031e5640c60db4ffb9816f6edaf523dcfadd2c0a7" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.123630 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.135723 4740 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: E0216 13:10:40.136200 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="sg-core" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136227 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="sg-core" Feb 16 13:10:40 crc kubenswrapper[4740]: E0216 13:10:40.136259 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="ceilometer-notification-agent" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136270 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="ceilometer-notification-agent" Feb 16 13:10:40 crc kubenswrapper[4740]: E0216 13:10:40.136290 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="proxy-httpd" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136298 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="proxy-httpd" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136523 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="ceilometer-notification-agent" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136560 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="proxy-httpd" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.136577 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23974e9-800c-4295-8f84-89b4052280cd" containerName="sg-core" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.139016 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.142075 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.142311 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.156550 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266217 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266331 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266369 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrfb\" (UniqueName: \"kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266424 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " 
pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266495 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266675 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.266800 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368400 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztrfb\" (UniqueName: \"kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368428 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368487 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368674 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.368712 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.370531 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: 
I0216 13:10:40.370592 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.375431 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.382073 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.384843 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.388965 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts\") pod \"ceilometer-0\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.389893 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztrfb\" (UniqueName: \"kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb\") pod \"ceilometer-0\" (UID: 
\"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.472650 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:40 crc kubenswrapper[4740]: I0216 13:10:40.944319 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:40 crc kubenswrapper[4740]: W0216 13:10:40.951698 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66639a68_b84e_4e5f_be92_a3a8f9b7a0fc.slice/crio-b433ec71b31aae836b3d29a5d16e7a01c58ca4844f4917afcf82559a17ea291c WatchSource:0}: Error finding container b433ec71b31aae836b3d29a5d16e7a01c58ca4844f4917afcf82559a17ea291c: Status 404 returned error can't find the container with id b433ec71b31aae836b3d29a5d16e7a01c58ca4844f4917afcf82559a17ea291c Feb 16 13:10:41 crc kubenswrapper[4740]: I0216 13:10:41.016325 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fcc53865-a327-4f02-a908-f0b97ae1e2c2","Type":"ContainerStarted","Data":"d5353b29dea4f1a0aef8ab1e454a5ec26d2e90bcc58d89b9c73b48f13f41d506"} Feb 16 13:10:41 crc kubenswrapper[4740]: I0216 13:10:41.018294 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerStarted","Data":"b433ec71b31aae836b3d29a5d16e7a01c58ca4844f4917afcf82559a17ea291c"} Feb 16 13:10:41 crc kubenswrapper[4740]: I0216 13:10:41.291845 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e23974e9-800c-4295-8f84-89b4052280cd" path="/var/lib/kubelet/pods/e23974e9-800c-4295-8f84-89b4052280cd/volumes" Feb 16 13:10:42 crc kubenswrapper[4740]: I0216 13:10:42.032539 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"fcc53865-a327-4f02-a908-f0b97ae1e2c2","Type":"ContainerStarted","Data":"f8b741c091a93f5638e3803332bc679afdde3e39b144866db437b22c6c5b3b08"} Feb 16 13:10:42 crc kubenswrapper[4740]: I0216 13:10:42.034906 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 13:10:42 crc kubenswrapper[4740]: I0216 13:10:42.035970 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerStarted","Data":"72382a87f5e887b7907f799c275003271fd1fac55b0506ca9e68aa7d6e52a8eb"} Feb 16 13:10:42 crc kubenswrapper[4740]: I0216 13:10:42.073134 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.073111336 podStartE2EDuration="3.073111336s" podCreationTimestamp="2026-02-16 13:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:42.0599015 +0000 UTC m=+1069.436250231" watchObservedRunningTime="2026-02-16 13:10:42.073111336 +0000 UTC m=+1069.449460057" Feb 16 13:10:42 crc kubenswrapper[4740]: I0216 13:10:42.830228 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5cc7d69b6f-dmv77" Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.054993 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerStarted","Data":"c64596e74d389c4963c6362a5444417dd797ea07c0e4610fbcc271c462d5fb17"} Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.547220 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.817195 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-758758df44-4g6db" Feb 
16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.846784 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.866945 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.959956 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-758758df44-4g6db" Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.980655 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"] Feb 16 13:10:43 crc kubenswrapper[4740]: I0216 13:10:43.981157 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="dnsmasq-dns" containerID="cri-o://b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd" gracePeriod=10 Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.065380 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerID="e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8" exitCode=0 Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.065445 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerDied","Data":"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8"} Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.067558 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerStarted","Data":"c7fc77aa58de346a51958c33789668e734927d1f7118df0ac708504154d9cccf"} Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.135500 4740 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.523526 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.658499 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.660038 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.662405 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.662553 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mhlrx" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.668240 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.676213 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.716544 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.781772 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.781940 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782059 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782091 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782114 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782195 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8h97\" 
(UniqueName: \"kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97\") pod \"bc2fd3ee-2093-4adb-a7af-23d05c718429\" (UID: \"bc2fd3ee-2093-4adb-a7af-23d05c718429\") " Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782524 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5ld\" (UniqueName: \"kubernetes.io/projected/4f78f448-6577-48d1-b077-01e42c14758c-kube-api-access-pr5ld\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782588 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782632 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config-secret\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.782682 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.808177 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97" 
(OuterVolumeSpecName: "kube-api-access-x8h97") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). InnerVolumeSpecName "kube-api-access-x8h97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.883794 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5ld\" (UniqueName: \"kubernetes.io/projected/4f78f448-6577-48d1-b077-01e42c14758c-kube-api-access-pr5ld\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.883877 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.883911 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config-secret\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.883952 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.884061 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8h97\" (UniqueName: \"kubernetes.io/projected/bc2fd3ee-2093-4adb-a7af-23d05c718429-kube-api-access-x8h97\") on node \"crc\" 
DevicePath \"\"" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.884873 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.886661 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.891569 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-openstack-config-secret\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.893554 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.906449 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f78f448-6577-48d1-b077-01e42c14758c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.909650 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5ld\" (UniqueName: \"kubernetes.io/projected/4f78f448-6577-48d1-b077-01e42c14758c-kube-api-access-pr5ld\") pod \"openstackclient\" (UID: \"4f78f448-6577-48d1-b077-01e42c14758c\") " pod="openstack/openstackclient" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.929616 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.952460 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.979201 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config" (OuterVolumeSpecName: "config") pod "bc2fd3ee-2093-4adb-a7af-23d05c718429" (UID: "bc2fd3ee-2093-4adb-a7af-23d05c718429"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.992043 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.992280 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.992369 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.992444 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:44 crc kubenswrapper[4740]: I0216 13:10:44.992511 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc2fd3ee-2093-4adb-a7af-23d05c718429-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.080004 4740 generic.go:334] "Generic (PLEG): container finished" podID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerID="b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd" exitCode=0 Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.080024 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" event={"ID":"bc2fd3ee-2093-4adb-a7af-23d05c718429","Type":"ContainerDied","Data":"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd"} Feb 16 13:10:45 crc 
kubenswrapper[4740]: I0216 13:10:45.080421 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" event={"ID":"bc2fd3ee-2093-4adb-a7af-23d05c718429","Type":"ContainerDied","Data":"ee39dbad8b099c7103531639b41565947f88a2ed618018750ab7657614220f4a"} Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.080465 4740 scope.go:117] "RemoveContainer" containerID="b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.080061 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-qxpg7" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.080795 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.081324 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="cinder-scheduler" containerID="cri-o://7fbcc8a8bd921bc8214ceccbe3647e93cff6d90abe01c523347b0aea8fa73dd5" gracePeriod=30 Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.081404 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="probe" containerID="cri-o://06d1d7b93be02f29dc72db7cbecd2c0acd7f9cd6914e145816221b024786f6bd" gracePeriod=30 Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.121738 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"] Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.129646 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-qxpg7"] Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.132854 4740 scope.go:117] "RemoveContainer" 
containerID="2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.164104 4740 scope.go:117] "RemoveContainer" containerID="b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd" Feb 16 13:10:45 crc kubenswrapper[4740]: E0216 13:10:45.173291 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd\": container with ID starting with b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd not found: ID does not exist" containerID="b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.173354 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd"} err="failed to get container status \"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd\": rpc error: code = NotFound desc = could not find container \"b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd\": container with ID starting with b8d7c75c234ad1900a90959586c4b6bfcccbde4506a73ea5191991a1c6f034cd not found: ID does not exist" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.173386 4740 scope.go:117] "RemoveContainer" containerID="2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7" Feb 16 13:10:45 crc kubenswrapper[4740]: E0216 13:10:45.180335 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7\": container with ID starting with 2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7 not found: ID does not exist" containerID="2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7" Feb 16 13:10:45 crc 
kubenswrapper[4740]: I0216 13:10:45.180395 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7"} err="failed to get container status \"2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7\": rpc error: code = NotFound desc = could not find container \"2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7\": container with ID starting with 2f0ebe12ea6ad1439a1954ccb53e533cf75e7220d22735077428a0ae75db29c7 not found: ID does not exist" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.316842 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" path="/var/lib/kubelet/pods/bc2fd3ee-2093-4adb-a7af-23d05c718429/volumes" Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.575162 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.575548 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:10:45 crc kubenswrapper[4740]: W0216 13:10:45.633483 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f78f448_6577_48d1_b077_01e42c14758c.slice/crio-8bee62cb9170bfef2be256ac1989255d38f5375f97f5a63d49f8f3a0eabeb3fa WatchSource:0}: Error finding container 8bee62cb9170bfef2be256ac1989255d38f5375f97f5a63d49f8f3a0eabeb3fa: Status 404 
returned error can't find the container with id 8bee62cb9170bfef2be256ac1989255d38f5375f97f5a63d49f8f3a0eabeb3fa Feb 16 13:10:45 crc kubenswrapper[4740]: I0216 13:10:45.635074 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.100430 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4f78f448-6577-48d1-b077-01e42c14758c","Type":"ContainerStarted","Data":"8bee62cb9170bfef2be256ac1989255d38f5375f97f5a63d49f8f3a0eabeb3fa"} Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.105199 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerStarted","Data":"dbc33c2aa4005cf8d6522a704d76dbdcfddf6a4b426cc7f7ac7e835b25c92d9b"} Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.107529 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.117354 4740 generic.go:334] "Generic (PLEG): container finished" podID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerID="06d1d7b93be02f29dc72db7cbecd2c0acd7f9cd6914e145816221b024786f6bd" exitCode=0 Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.117424 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerDied","Data":"06d1d7b93be02f29dc72db7cbecd2c0acd7f9cd6914e145816221b024786f6bd"} Feb 16 13:10:46 crc kubenswrapper[4740]: I0216 13:10:46.143614 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.318937159 podStartE2EDuration="6.143598022s" podCreationTimestamp="2026-02-16 13:10:40 +0000 UTC" firstStartedPulling="2026-02-16 13:10:40.954935244 +0000 UTC m=+1068.331283965" lastFinishedPulling="2026-02-16 13:10:44.779596107 
+0000 UTC m=+1072.155944828" observedRunningTime="2026-02-16 13:10:46.139968377 +0000 UTC m=+1073.516317118" watchObservedRunningTime="2026-02-16 13:10:46.143598022 +0000 UTC m=+1073.519946743" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.115927 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d4b8b747f-tcdvw"] Feb 16 13:10:49 crc kubenswrapper[4740]: E0216 13:10:49.117277 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="init" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.117314 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="init" Feb 16 13:10:49 crc kubenswrapper[4740]: E0216 13:10:49.117362 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="dnsmasq-dns" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.117371 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="dnsmasq-dns" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.117562 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2fd3ee-2093-4adb-a7af-23d05c718429" containerName="dnsmasq-dns" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.118915 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.122727 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.122953 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.123060 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.168729 4740 generic.go:334] "Generic (PLEG): container finished" podID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerID="7fbcc8a8bd921bc8214ceccbe3647e93cff6d90abe01c523347b0aea8fa73dd5" exitCode=0 Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.168826 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d4b8b747f-tcdvw"] Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.168860 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerDied","Data":"7fbcc8a8bd921bc8214ceccbe3647e93cff6d90abe01c523347b0aea8fa73dd5"} Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185072 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmkpb\" (UniqueName: \"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-kube-api-access-lmkpb\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185148 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-combined-ca-bundle\") 
pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185298 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-public-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185352 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-run-httpd\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185479 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-internal-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185546 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-log-httpd\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185711 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-etc-swift\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.185790 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-config-data\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287752 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-etc-swift\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287826 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-config-data\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmkpb\" (UniqueName: \"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-kube-api-access-lmkpb\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287912 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-combined-ca-bundle\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287941 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-public-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.287967 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-run-httpd\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.288001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-internal-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.288026 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-log-httpd\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.288546 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-log-httpd\") pod 
\"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.289803 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fae3001c-021f-4f48-860e-0893978fafaa-run-httpd\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.295517 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-combined-ca-bundle\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.297571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-public-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.309847 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-etc-swift\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.312777 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-config-data\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " 
pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.314264 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae3001c-021f-4f48-860e-0893978fafaa-internal-tls-certs\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.315397 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmkpb\" (UniqueName: \"kubernetes.io/projected/fae3001c-021f-4f48-860e-0893978fafaa-kube-api-access-lmkpb\") pod \"swift-proxy-6d4b8b747f-tcdvw\" (UID: \"fae3001c-021f-4f48-860e-0893978fafaa\") " pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.454452 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:49 crc kubenswrapper[4740]: I0216 13:10:49.954961 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.123940 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.123996 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.124031 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.124092 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.124142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.124172 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf65h\" (UniqueName: 
\"kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h\") pod \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\" (UID: \"a8fb04c0-6e01-4174-93b2-195dea7f96b6\") " Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.125969 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.137958 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts" (OuterVolumeSpecName: "scripts") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.139488 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h" (OuterVolumeSpecName: "kube-api-access-wf65h") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "kube-api-access-wf65h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.165545 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.181253 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a8fb04c0-6e01-4174-93b2-195dea7f96b6","Type":"ContainerDied","Data":"a30f8e0a38b39c3a533a61933daccfe7c9ac4a55dac7d23ebbd3bc31afe612c4"} Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.181795 4740 scope.go:117] "RemoveContainer" containerID="06d1d7b93be02f29dc72db7cbecd2c0acd7f9cd6914e145816221b024786f6bd" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.181960 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.221114 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.226350 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8fb04c0-6e01-4174-93b2-195dea7f96b6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.226390 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.226401 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf65h\" (UniqueName: \"kubernetes.io/projected/a8fb04c0-6e01-4174-93b2-195dea7f96b6-kube-api-access-wf65h\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.226410 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.226565 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: W0216 13:10:50.254116 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfae3001c_021f_4f48_860e_0893978fafaa.slice/crio-3278858c175dfda85f13bf618b30958764fe1971dbc60d752ebb3b423760c7b6 WatchSource:0}: Error finding container 3278858c175dfda85f13bf618b30958764fe1971dbc60d752ebb3b423760c7b6: Status 404 returned error can't find the container with id 3278858c175dfda85f13bf618b30958764fe1971dbc60d752ebb3b423760c7b6 Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 
13:10:50.256144 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d4b8b747f-tcdvw"] Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.271715 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data" (OuterVolumeSpecName: "config-data") pod "a8fb04c0-6e01-4174-93b2-195dea7f96b6" (UID: "a8fb04c0-6e01-4174-93b2-195dea7f96b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.328364 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb04c0-6e01-4174-93b2-195dea7f96b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.391211 4740 scope.go:117] "RemoveContainer" containerID="7fbcc8a8bd921bc8214ceccbe3647e93cff6d90abe01c523347b0aea8fa73dd5" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.524138 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.536181 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.545187 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:50 crc kubenswrapper[4740]: E0216 13:10:50.545579 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="cinder-scheduler" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.545596 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="cinder-scheduler" Feb 16 13:10:50 crc kubenswrapper[4740]: E0216 13:10:50.545627 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="probe" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.545634 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="probe" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.545794 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="cinder-scheduler" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.545821 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" containerName="probe" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.546736 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.557229 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.566205 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.632993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjvcl\" (UniqueName: \"kubernetes.io/projected/8cc77810-2df3-4a51-8429-326b706d2388-kube-api-access-cjvcl\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.633398 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 
13:10:50.633447 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.633485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.633548 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-scripts\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.633742 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8cc77810-2df3-4a51-8429-326b706d2388-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.735770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.736866 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.736934 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.737041 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-scripts\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.737139 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8cc77810-2df3-4a51-8429-326b706d2388-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.737182 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjvcl\" (UniqueName: \"kubernetes.io/projected/8cc77810-2df3-4a51-8429-326b706d2388-kube-api-access-cjvcl\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.738028 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8cc77810-2df3-4a51-8429-326b706d2388-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.742383 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.742406 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.742382 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-scripts\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.743207 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc77810-2df3-4a51-8429-326b706d2388-config-data\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.756163 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjvcl\" (UniqueName: \"kubernetes.io/projected/8cc77810-2df3-4a51-8429-326b706d2388-kube-api-access-cjvcl\") pod \"cinder-scheduler-0\" (UID: \"8cc77810-2df3-4a51-8429-326b706d2388\") " pod="openstack/cinder-scheduler-0" Feb 16 13:10:50 crc kubenswrapper[4740]: I0216 13:10:50.875913 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.322828 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fb04c0-6e01-4174-93b2-195dea7f96b6" path="/var/lib/kubelet/pods/a8fb04c0-6e01-4174-93b2-195dea7f96b6/volumes" Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328213 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" event={"ID":"fae3001c-021f-4f48-860e-0893978fafaa","Type":"ContainerStarted","Data":"f09e821fc0a96094228dd18b50d115c68fbabd0942c18bd405fd6bea81a0106b"} Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328321 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" event={"ID":"fae3001c-021f-4f48-860e-0893978fafaa","Type":"ContainerStarted","Data":"14b07196c52c4c95ce61a63376ccd3d53a29dcf74391576e8a665ef8515c5106"} Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328355 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328378 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328421 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.328437 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" event={"ID":"fae3001c-021f-4f48-860e-0893978fafaa","Type":"ContainerStarted","Data":"3278858c175dfda85f13bf618b30958764fe1971dbc60d752ebb3b423760c7b6"} Feb 16 13:10:51 crc kubenswrapper[4740]: I0216 13:10:51.383730 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" podStartSLOduration=2.383710261 
podStartE2EDuration="2.383710261s" podCreationTimestamp="2026-02-16 13:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:10:51.362339848 +0000 UTC m=+1078.738688589" watchObservedRunningTime="2026-02-16 13:10:51.383710261 +0000 UTC m=+1078.760058982" Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.045532 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.340288 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8cc77810-2df3-4a51-8429-326b706d2388","Type":"ContainerStarted","Data":"9c7afb5b2a8227b3161c2d72f392a8cfea9c6514a9a370d738c7675bf010c2cd"} Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.566798 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.567423 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-central-agent" containerID="cri-o://72382a87f5e887b7907f799c275003271fd1fac55b0506ca9e68aa7d6e52a8eb" gracePeriod=30 Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.567661 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="proxy-httpd" containerID="cri-o://dbc33c2aa4005cf8d6522a704d76dbdcfddf6a4b426cc7f7ac7e835b25c92d9b" gracePeriod=30 Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.567730 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-notification-agent" 
containerID="cri-o://c64596e74d389c4963c6362a5444417dd797ea07c0e4610fbcc271c462d5fb17" gracePeriod=30 Feb 16 13:10:52 crc kubenswrapper[4740]: I0216 13:10:52.567871 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="sg-core" containerID="cri-o://c7fc77aa58de346a51958c33789668e734927d1f7118df0ac708504154d9cccf" gracePeriod=30 Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.354992 4740 generic.go:334] "Generic (PLEG): container finished" podID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerID="dbc33c2aa4005cf8d6522a704d76dbdcfddf6a4b426cc7f7ac7e835b25c92d9b" exitCode=0 Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355341 4740 generic.go:334] "Generic (PLEG): container finished" podID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerID="c7fc77aa58de346a51958c33789668e734927d1f7118df0ac708504154d9cccf" exitCode=2 Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355075 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerDied","Data":"dbc33c2aa4005cf8d6522a704d76dbdcfddf6a4b426cc7f7ac7e835b25c92d9b"} Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355394 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerDied","Data":"c7fc77aa58de346a51958c33789668e734927d1f7118df0ac708504154d9cccf"} Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355420 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerDied","Data":"c64596e74d389c4963c6362a5444417dd797ea07c0e4610fbcc271c462d5fb17"} Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355356 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerID="c64596e74d389c4963c6362a5444417dd797ea07c0e4610fbcc271c462d5fb17" exitCode=0 Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355439 4740 generic.go:334] "Generic (PLEG): container finished" podID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerID="72382a87f5e887b7907f799c275003271fd1fac55b0506ca9e68aa7d6e52a8eb" exitCode=0 Feb 16 13:10:53 crc kubenswrapper[4740]: I0216 13:10:53.355454 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerDied","Data":"72382a87f5e887b7907f799c275003271fd1fac55b0506ca9e68aa7d6e52a8eb"} Feb 16 13:10:54 crc kubenswrapper[4740]: I0216 13:10:54.524962 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.693172 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-r46m7"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.694857 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.731370 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-r46m7"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.780367 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-ctmrz"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.781415 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.811118 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ctmrz"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.831264 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kh9r\" (UniqueName: \"kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.831628 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nr5p\" (UniqueName: \"kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.831727 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.831989 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.886085 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-db-create-jfqz9"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.892471 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.907883 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-877a-account-create-update-w87w5"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.909501 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.919680 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.924059 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jfqz9"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.929934 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-877a-account-create-update-w87w5"] Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.934253 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.934366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kh9r\" (UniqueName: \"kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.934420 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7nr5p\" (UniqueName: \"kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.934448 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.935289 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.935843 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.985133 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kh9r\" (UniqueName: \"kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r\") pod \"nova-cell0-db-create-ctmrz\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:56 crc kubenswrapper[4740]: I0216 13:10:56.987116 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nr5p\" 
(UniqueName: \"kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p\") pod \"nova-api-db-create-r46m7\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.032941 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r46m7" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.036052 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.036168 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pn2z\" (UniqueName: \"kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z\") pod \"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.036483 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c5xs\" (UniqueName: \"kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.036572 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts\") pod 
\"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.092264 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7f98-account-create-update-77pz4"] Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.093774 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.100614 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.102299 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.107276 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7f98-account-create-update-77pz4"] Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.139157 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pn2z\" (UniqueName: \"kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z\") pod \"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.139458 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c5xs\" (UniqueName: \"kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.139620 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4frj\" (UniqueName: \"kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.139752 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts\") pod \"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.140033 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.140188 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.140961 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts\") pod \"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " 
pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.141195 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.185571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c5xs\" (UniqueName: \"kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs\") pod \"nova-cell1-db-create-jfqz9\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.186976 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pn2z\" (UniqueName: \"kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z\") pod \"nova-api-877a-account-create-update-w87w5\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.225507 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.240091 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.241422 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4frj\" (UniqueName: \"kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.241507 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.242121 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.258413 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4frj\" (UniqueName: \"kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj\") pod \"nova-cell0-7f98-account-create-update-77pz4\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.299854 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9f9b-account-create-update-rc9td"] Feb 16 13:10:57 crc kubenswrapper[4740]: 
I0216 13:10:57.301586 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.304128 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.322632 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9f9b-account-create-update-rc9td"] Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.343147 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4722\" (UniqueName: \"kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.343366 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.419005 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.445036 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.445149 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4722\" (UniqueName: \"kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.445887 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.466855 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4722\" (UniqueName: \"kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722\") pod \"nova-cell1-9f9b-account-create-update-rc9td\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:57 crc kubenswrapper[4740]: I0216 13:10:57.622980 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:10:58 crc kubenswrapper[4740]: I0216 13:10:58.948210 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.124378 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.124848 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.124955 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.125055 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.125120 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " 
Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.125206 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.125283 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztrfb\" (UniqueName: \"kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb\") pod \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\" (UID: \"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc\") " Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.125315 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.126005 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.134319 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb" (OuterVolumeSpecName: "kube-api-access-ztrfb") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "kube-api-access-ztrfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.151883 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts" (OuterVolumeSpecName: "scripts") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.197580 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.238992 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.239425 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.239520 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.239550 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: 
I0216 13:10:59.239597 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztrfb\" (UniqueName: \"kubernetes.io/projected/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-kube-api-access-ztrfb\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.349968 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data" (OuterVolumeSpecName: "config-data") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.375869 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-r46m7"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.375913 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9f9b-account-create-update-rc9td"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.375933 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-877a-account-create-update-w87w5"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.376864 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" (UID: "66639a68-b84e-4e5f-be92-a3a8f9b7a0fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.417030 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r46m7" event={"ID":"ce83ec9b-39d5-4bf9-b343-d3f06f886841","Type":"ContainerStarted","Data":"6aac4868b94b8fa997c6ab3355c8f6f2ce451f6b914ddb3f25c66b60eb51c555"} Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.420667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66639a68-b84e-4e5f-be92-a3a8f9b7a0fc","Type":"ContainerDied","Data":"b433ec71b31aae836b3d29a5d16e7a01c58ca4844f4917afcf82559a17ea291c"} Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.420724 4740 scope.go:117] "RemoveContainer" containerID="dbc33c2aa4005cf8d6522a704d76dbdcfddf6a4b426cc7f7ac7e835b25c92d9b" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.420859 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.422890 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" event={"ID":"2dc528c1-14c9-4bb4-a6f8-621fc066e98a","Type":"ContainerStarted","Data":"fc1d94585cbf542c4a82daaa8bc901a3a032869c85dcc2de75ed3559a0672a2b"} Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.432956 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-877a-account-create-update-w87w5" event={"ID":"c28029f1-eca0-4cd5-95b3-774c21d6d0ed","Type":"ContainerStarted","Data":"585e016e2bc88ce1b0b163fd4e4601844702a5701f75316cf007c1ee55969af3"} Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.441816 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4f78f448-6577-48d1-b077-01e42c14758c","Type":"ContainerStarted","Data":"20f4a8e57e76cc360b7850b5367d2684c122f4c9c4b3092787b94b72265cf7db"} Feb 16 13:10:59 crc 
kubenswrapper[4740]: I0216 13:10:59.446459 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.446513 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.457922 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7f98-account-create-update-77pz4"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.472387 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.708924209 podStartE2EDuration="15.472365912s" podCreationTimestamp="2026-02-16 13:10:44 +0000 UTC" firstStartedPulling="2026-02-16 13:10:45.636862201 +0000 UTC m=+1073.013210922" lastFinishedPulling="2026-02-16 13:10:58.400303894 +0000 UTC m=+1085.776652625" observedRunningTime="2026-02-16 13:10:59.464590537 +0000 UTC m=+1086.840939258" watchObservedRunningTime="2026-02-16 13:10:59.472365912 +0000 UTC m=+1086.848714633" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.478737 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.485135 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d4b8b747f-tcdvw" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.504557 4740 scope.go:117] "RemoveContainer" containerID="c7fc77aa58de346a51958c33789668e734927d1f7118df0ac708504154d9cccf" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.559849 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 
16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.585714 4740 scope.go:117] "RemoveContainer" containerID="c64596e74d389c4963c6362a5444417dd797ea07c0e4610fbcc271c462d5fb17" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.608392 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.625478 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:59 crc kubenswrapper[4740]: E0216 13:10:59.626445 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="proxy-httpd" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626467 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="proxy-httpd" Feb 16 13:10:59 crc kubenswrapper[4740]: E0216 13:10:59.626477 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-notification-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626483 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-notification-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: E0216 13:10:59.626499 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="sg-core" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626505 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="sg-core" Feb 16 13:10:59 crc kubenswrapper[4740]: E0216 13:10:59.626522 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-central-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626530 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-central-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626709 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-central-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626725 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="sg-core" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626735 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="proxy-httpd" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.626745 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" containerName="ceilometer-notification-agent" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.628767 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.633352 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.633575 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.653201 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.667060 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ctmrz"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.679217 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jfqz9"] Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.735660 4740 scope.go:117] "RemoveContainer" containerID="72382a87f5e887b7907f799c275003271fd1fac55b0506ca9e68aa7d6e52a8eb" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753066 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753111 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggbsf\" (UniqueName: \"kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753148 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753217 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753251 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753317 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.753352 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.854836 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 
16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.854899 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.854950 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.854974 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.855029 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.855051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggbsf\" (UniqueName: \"kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.855078 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.861503 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.861840 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.863734 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.865095 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.871782 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.881697 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:10:59 crc kubenswrapper[4740]: I0216 13:10:59.890626 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggbsf\" (UniqueName: \"kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf\") pod \"ceilometer-0\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " pod="openstack/ceilometer-0" Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.137312 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.466340 4740 generic.go:334] "Generic (PLEG): container finished" podID="b93273db-db1d-4c4b-85ad-2d87065c42f4" containerID="20579605a8e47ed4449e3d674d1bbbbcd44cb3f5f3aba1e332068a4ec56b723d" exitCode=0 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.466767 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jfqz9" event={"ID":"b93273db-db1d-4c4b-85ad-2d87065c42f4","Type":"ContainerDied","Data":"20579605a8e47ed4449e3d674d1bbbbcd44cb3f5f3aba1e332068a4ec56b723d"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.466796 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jfqz9" event={"ID":"b93273db-db1d-4c4b-85ad-2d87065c42f4","Type":"ContainerStarted","Data":"c75c45c1f77c0a9c8fd12b4ba2ae2d39b749889c83e394f783874aac9c69b1c4"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.473429 4740 generic.go:334] "Generic (PLEG): container finished" podID="c28029f1-eca0-4cd5-95b3-774c21d6d0ed" containerID="344296975e26624ad4cacf476e74a30fa10626ccf25a97f67365e99050dc2e41" exitCode=0 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.473481 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-877a-account-create-update-w87w5" event={"ID":"c28029f1-eca0-4cd5-95b3-774c21d6d0ed","Type":"ContainerDied","Data":"344296975e26624ad4cacf476e74a30fa10626ccf25a97f67365e99050dc2e41"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.478587 4740 generic.go:334] "Generic (PLEG): container finished" podID="ce83ec9b-39d5-4bf9-b343-d3f06f886841" containerID="4aa507b0c5065c88dcc09741d4612ac5be715de1d8ac33a2444842a74593667f" exitCode=0 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.478663 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r46m7" event={"ID":"ce83ec9b-39d5-4bf9-b343-d3f06f886841","Type":"ContainerDied","Data":"4aa507b0c5065c88dcc09741d4612ac5be715de1d8ac33a2444842a74593667f"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.488336 4740 generic.go:334] "Generic (PLEG): container finished" podID="e2ec561b-87d9-418d-9376-c48bb31d46f9" containerID="04fb5b738af72ba9d62044da274c169ea32070a2cc600c09016c81106717ecdd" exitCode=0 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.488421 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" event={"ID":"e2ec561b-87d9-418d-9376-c48bb31d46f9","Type":"ContainerDied","Data":"04fb5b738af72ba9d62044da274c169ea32070a2cc600c09016c81106717ecdd"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.488457 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" event={"ID":"e2ec561b-87d9-418d-9376-c48bb31d46f9","Type":"ContainerStarted","Data":"1990ff02490a5ac4c3ee758f612e5e6e28b9ec1836380921261354160ecbce9d"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.502415 4740 generic.go:334] "Generic (PLEG): container finished" podID="0bf48619-6b39-4215-950a-f8da809dcc11" containerID="d50713edffd58148f7599a08a7e47edd0028addbe2f11ff9b3ec1d7b2dedaaf8" exitCode=0 Feb 16 13:11:00 crc 
kubenswrapper[4740]: I0216 13:11:00.502565 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ctmrz" event={"ID":"0bf48619-6b39-4215-950a-f8da809dcc11","Type":"ContainerDied","Data":"d50713edffd58148f7599a08a7e47edd0028addbe2f11ff9b3ec1d7b2dedaaf8"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.502589 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ctmrz" event={"ID":"0bf48619-6b39-4215-950a-f8da809dcc11","Type":"ContainerStarted","Data":"264bfa432ce913384c33b9a2803385353bcb4dacbf157ec9caac21b48465e46c"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.509310 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8cc77810-2df3-4a51-8429-326b706d2388","Type":"ContainerStarted","Data":"5f5c1f53d7e962d3996e4cae85f99250cc4b43ee0e746058e8d85d62d6ec4a0d"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.528116 4740 generic.go:334] "Generic (PLEG): container finished" podID="2dc528c1-14c9-4bb4-a6f8-621fc066e98a" containerID="04f7de9c276248f11e1d14a403f81582e43458c7dfd1d3b8fc3dc8186de0b569" exitCode=0 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.529270 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" event={"ID":"2dc528c1-14c9-4bb4-a6f8-621fc066e98a","Type":"ContainerDied","Data":"04f7de9c276248f11e1d14a403f81582e43458c7dfd1d3b8fc3dc8186de0b569"} Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.600309 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.600568 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-log" containerID="cri-o://a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d" 
gracePeriod=30 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.601023 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-httpd" containerID="cri-o://78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b" gracePeriod=30 Feb 16 13:11:00 crc kubenswrapper[4740]: I0216 13:11:00.712378 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.296617 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66639a68-b84e-4e5f-be92-a3a8f9b7a0fc" path="/var/lib/kubelet/pods/66639a68-b84e-4e5f-be92-a3a8f9b7a0fc/volumes" Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.543846 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerStarted","Data":"b47ff1a3cc1cf6e6ce635669516cb1eeff7f42e4967363ec4445652f9d813b11"} Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.544223 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerStarted","Data":"fc5f113e8cade2490d498d1de5db88d22ff7ac6d77cb5f061cb4a440e771185d"} Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.545631 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8cc77810-2df3-4a51-8429-326b706d2388","Type":"ContainerStarted","Data":"d5a1cd3c9bdd8394f36c620fccd25a64965afba1d80863bfc2e95f488a0fc03e"} Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.551259 4740 generic.go:334] "Generic (PLEG): container finished" podID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerID="a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d" exitCode=143 Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.551407 
4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerDied","Data":"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d"} Feb 16 13:11:01 crc kubenswrapper[4740]: I0216 13:11:01.573471 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=11.57344775 podStartE2EDuration="11.57344775s" podCreationTimestamp="2026-02-16 13:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:11:01.569271789 +0000 UTC m=+1088.945620510" watchObservedRunningTime="2026-02-16 13:11:01.57344775 +0000 UTC m=+1088.949796481" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.168176 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.322078 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts\") pod \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.322631 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4722\" (UniqueName: \"kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722\") pod \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\" (UID: \"2dc528c1-14c9-4bb4-a6f8-621fc066e98a\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.324250 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "2dc528c1-14c9-4bb4-a6f8-621fc066e98a" (UID: "2dc528c1-14c9-4bb4-a6f8-621fc066e98a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.335136 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722" (OuterVolumeSpecName: "kube-api-access-b4722") pod "2dc528c1-14c9-4bb4-a6f8-621fc066e98a" (UID: "2dc528c1-14c9-4bb4-a6f8-621fc066e98a"). InnerVolumeSpecName "kube-api-access-b4722". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.424839 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4722\" (UniqueName: \"kubernetes.io/projected/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-kube-api-access-b4722\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.424875 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2dc528c1-14c9-4bb4-a6f8-621fc066e98a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.475514 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.483495 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.486710 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r46m7" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.491280 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.514429 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.568596 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ctmrz" event={"ID":"0bf48619-6b39-4215-950a-f8da809dcc11","Type":"ContainerDied","Data":"264bfa432ce913384c33b9a2803385353bcb4dacbf157ec9caac21b48465e46c"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.568632 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="264bfa432ce913384c33b9a2803385353bcb4dacbf157ec9caac21b48465e46c" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.568685 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ctmrz" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.571667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerStarted","Data":"8ce27faa665c6476b0cc53766481f3f6617cbb88c529599da7ba09a849b8b74d"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.573585 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jfqz9" event={"ID":"b93273db-db1d-4c4b-85ad-2d87065c42f4","Type":"ContainerDied","Data":"c75c45c1f77c0a9c8fd12b4ba2ae2d39b749889c83e394f783874aac9c69b1c4"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.573608 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c75c45c1f77c0a9c8fd12b4ba2ae2d39b749889c83e394f783874aac9c69b1c4" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.573646 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jfqz9" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.574931 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-r46m7" event={"ID":"ce83ec9b-39d5-4bf9-b343-d3f06f886841","Type":"ContainerDied","Data":"6aac4868b94b8fa997c6ab3355c8f6f2ce451f6b914ddb3f25c66b60eb51c555"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.575045 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-r46m7" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.574952 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aac4868b94b8fa997c6ab3355c8f6f2ce451f6b914ddb3f25c66b60eb51c555" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.576337 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" event={"ID":"e2ec561b-87d9-418d-9376-c48bb31d46f9","Type":"ContainerDied","Data":"1990ff02490a5ac4c3ee758f612e5e6e28b9ec1836380921261354160ecbce9d"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.576360 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1990ff02490a5ac4c3ee758f612e5e6e28b9ec1836380921261354160ecbce9d" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.576400 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7f98-account-create-update-77pz4" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.577477 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" event={"ID":"2dc528c1-14c9-4bb4-a6f8-621fc066e98a","Type":"ContainerDied","Data":"fc1d94585cbf542c4a82daaa8bc901a3a032869c85dcc2de75ed3559a0672a2b"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.577497 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1d94585cbf542c4a82daaa8bc901a3a032869c85dcc2de75ed3559a0672a2b" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.577531 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9f9b-account-create-update-rc9td" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.583277 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-877a-account-create-update-w87w5" event={"ID":"c28029f1-eca0-4cd5-95b3-774c21d6d0ed","Type":"ContainerDied","Data":"585e016e2bc88ce1b0b163fd4e4601844702a5701f75316cf007c1ee55969af3"} Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.583301 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="585e016e2bc88ce1b0b163fd4e4601844702a5701f75316cf007c1ee55969af3" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.583283 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-877a-account-create-update-w87w5" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630277 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts\") pod \"b93273db-db1d-4c4b-85ad-2d87065c42f4\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630327 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts\") pod \"e2ec561b-87d9-418d-9376-c48bb31d46f9\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630397 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nr5p\" (UniqueName: \"kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p\") pod \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630453 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pn2z\" (UniqueName: \"kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z\") pod \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630481 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kh9r\" (UniqueName: \"kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r\") pod \"0bf48619-6b39-4215-950a-f8da809dcc11\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630509 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts\") pod \"0bf48619-6b39-4215-950a-f8da809dcc11\" (UID: \"0bf48619-6b39-4215-950a-f8da809dcc11\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630534 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c5xs\" (UniqueName: \"kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs\") pod \"b93273db-db1d-4c4b-85ad-2d87065c42f4\" (UID: \"b93273db-db1d-4c4b-85ad-2d87065c42f4\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630616 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts\") pod \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\" (UID: \"c28029f1-eca0-4cd5-95b3-774c21d6d0ed\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630646 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4frj\" (UniqueName: \"kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj\") pod \"e2ec561b-87d9-418d-9376-c48bb31d46f9\" (UID: \"e2ec561b-87d9-418d-9376-c48bb31d46f9\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.630681 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts\") pod \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\" (UID: \"ce83ec9b-39d5-4bf9-b343-d3f06f886841\") " Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.631941 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"ce83ec9b-39d5-4bf9-b343-d3f06f886841" (UID: "ce83ec9b-39d5-4bf9-b343-d3f06f886841"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.632584 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b93273db-db1d-4c4b-85ad-2d87065c42f4" (UID: "b93273db-db1d-4c4b-85ad-2d87065c42f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.633071 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2ec561b-87d9-418d-9376-c48bb31d46f9" (UID: "e2ec561b-87d9-418d-9376-c48bb31d46f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.634684 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c28029f1-eca0-4cd5-95b3-774c21d6d0ed" (UID: "c28029f1-eca0-4cd5-95b3-774c21d6d0ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.635520 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bf48619-6b39-4215-950a-f8da809dcc11" (UID: "0bf48619-6b39-4215-950a-f8da809dcc11"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.639991 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs" (OuterVolumeSpecName: "kube-api-access-8c5xs") pod "b93273db-db1d-4c4b-85ad-2d87065c42f4" (UID: "b93273db-db1d-4c4b-85ad-2d87065c42f4"). InnerVolumeSpecName "kube-api-access-8c5xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.639998 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r" (OuterVolumeSpecName: "kube-api-access-6kh9r") pod "0bf48619-6b39-4215-950a-f8da809dcc11" (UID: "0bf48619-6b39-4215-950a-f8da809dcc11"). InnerVolumeSpecName "kube-api-access-6kh9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.640081 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p" (OuterVolumeSpecName: "kube-api-access-7nr5p") pod "ce83ec9b-39d5-4bf9-b343-d3f06f886841" (UID: "ce83ec9b-39d5-4bf9-b343-d3f06f886841"). InnerVolumeSpecName "kube-api-access-7nr5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.640118 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z" (OuterVolumeSpecName: "kube-api-access-2pn2z") pod "c28029f1-eca0-4cd5-95b3-774c21d6d0ed" (UID: "c28029f1-eca0-4cd5-95b3-774c21d6d0ed"). InnerVolumeSpecName "kube-api-access-2pn2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.640172 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj" (OuterVolumeSpecName: "kube-api-access-n4frj") pod "e2ec561b-87d9-418d-9376-c48bb31d46f9" (UID: "e2ec561b-87d9-418d-9376-c48bb31d46f9"). InnerVolumeSpecName "kube-api-access-n4frj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734732 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734760 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4frj\" (UniqueName: \"kubernetes.io/projected/e2ec561b-87d9-418d-9376-c48bb31d46f9-kube-api-access-n4frj\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734770 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce83ec9b-39d5-4bf9-b343-d3f06f886841-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734779 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b93273db-db1d-4c4b-85ad-2d87065c42f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734789 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2ec561b-87d9-418d-9376-c48bb31d46f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734797 4740 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-7nr5p\" (UniqueName: \"kubernetes.io/projected/ce83ec9b-39d5-4bf9-b343-d3f06f886841-kube-api-access-7nr5p\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734825 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pn2z\" (UniqueName: \"kubernetes.io/projected/c28029f1-eca0-4cd5-95b3-774c21d6d0ed-kube-api-access-2pn2z\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734845 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kh9r\" (UniqueName: \"kubernetes.io/projected/0bf48619-6b39-4215-950a-f8da809dcc11-kube-api-access-6kh9r\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734859 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bf48619-6b39-4215-950a-f8da809dcc11-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.734871 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c5xs\" (UniqueName: \"kubernetes.io/projected/b93273db-db1d-4c4b-85ad-2d87065c42f4-kube-api-access-8c5xs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.821354 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d8c67b945-9qhdf" Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.894211 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:11:02 crc kubenswrapper[4740]: I0216 13:11:02.894446 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84f8c7948d-wxf52" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-api" containerID="cri-o://1e87b39de0e646ce6ba449ee63a82579996c381dcaa6e162f4b5763dfb4523c5" gracePeriod=30 Feb 16 13:11:02 
crc kubenswrapper[4740]: I0216 13:11:02.894672 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84f8c7948d-wxf52" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-httpd" containerID="cri-o://90b8c13105002e23bcbcbbfad6decf1c72cc959f43ea05e2caf8e9e59758b0a9" gracePeriod=30 Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.594340 4740 generic.go:334] "Generic (PLEG): container finished" podID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerID="90b8c13105002e23bcbcbbfad6decf1c72cc959f43ea05e2caf8e9e59758b0a9" exitCode=0 Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.594399 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerDied","Data":"90b8c13105002e23bcbcbbfad6decf1c72cc959f43ea05e2caf8e9e59758b0a9"} Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.597009 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerStarted","Data":"915edf68cd3c7270b236b58655ea096c142d3604b5d0dd9bafbc0091d2a43aae"} Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.668413 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.668675 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-log" containerID="cri-o://e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6" gracePeriod=30 Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.668741 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-httpd" 
containerID="cri-o://6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423" gracePeriod=30 Feb 16 13:11:03 crc kubenswrapper[4740]: I0216 13:11:03.803375 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.345149 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480312 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480360 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480447 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480501 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480582 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480610 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480650 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25p4g\" (UniqueName: \"kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.480668 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run\") pod \"444d5830-ca5b-426e-a7da-785e35ae1e65\" (UID: \"444d5830-ca5b-426e-a7da-785e35ae1e65\") " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.481395 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs" (OuterVolumeSpecName: "logs") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.481715 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.493450 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.497145 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts" (OuterVolumeSpecName: "scripts") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.499017 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g" (OuterVolumeSpecName: "kube-api-access-25p4g") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "kube-api-access-25p4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.524646 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5476559f6b-jvkbv" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.524781 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.528039 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.555847 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data" (OuterVolumeSpecName: "config-data") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.562488 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "444d5830-ca5b-426e-a7da-785e35ae1e65" (UID: "444d5830-ca5b-426e-a7da-785e35ae1e65"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582530 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582562 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582593 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582604 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582618 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25p4g\" (UniqueName: \"kubernetes.io/projected/444d5830-ca5b-426e-a7da-785e35ae1e65-kube-api-access-25p4g\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582626 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444d5830-ca5b-426e-a7da-785e35ae1e65-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582634 4740 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.582641 4740 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444d5830-ca5b-426e-a7da-785e35ae1e65-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.601929 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.607620 4740 generic.go:334] "Generic (PLEG): container finished" podID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerID="78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b" exitCode=0 Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.607683 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.607701 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerDied","Data":"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b"} Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.608362 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"444d5830-ca5b-426e-a7da-785e35ae1e65","Type":"ContainerDied","Data":"5b0e438309976f20e6cf23ad9e1052b831e4bb9fad154e2636f7ef4afee681a4"} Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.608402 4740 scope.go:117] "RemoveContainer" containerID="78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.611386 4740 generic.go:334] "Generic (PLEG): container finished" podID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerID="e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6" exitCode=143 Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.611463 4740 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerDied","Data":"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6"} Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.684595 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.687103 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.703462 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.714629 4740 scope.go:117] "RemoveContainer" containerID="a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.722678 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723345 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc528c1-14c9-4bb4-a6f8-621fc066e98a" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723361 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc528c1-14c9-4bb4-a6f8-621fc066e98a" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723371 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28029f1-eca0-4cd5-95b3-774c21d6d0ed" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723381 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28029f1-eca0-4cd5-95b3-774c21d6d0ed" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc 
kubenswrapper[4740]: E0216 13:11:04.723396 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf48619-6b39-4215-950a-f8da809dcc11" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723403 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf48619-6b39-4215-950a-f8da809dcc11" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723411 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-httpd" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723418 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-httpd" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723438 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-log" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723445 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-log" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723456 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce83ec9b-39d5-4bf9-b343-d3f06f886841" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723465 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce83ec9b-39d5-4bf9-b343-d3f06f886841" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723485 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93273db-db1d-4c4b-85ad-2d87065c42f4" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723492 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93273db-db1d-4c4b-85ad-2d87065c42f4" containerName="mariadb-database-create" 
Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.723529 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ec561b-87d9-418d-9376-c48bb31d46f9" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723537 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ec561b-87d9-418d-9376-c48bb31d46f9" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723704 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93273db-db1d-4c4b-85ad-2d87065c42f4" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723718 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-httpd" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723726 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" containerName="glance-log" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723737 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc528c1-14c9-4bb4-a6f8-621fc066e98a" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723749 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce83ec9b-39d5-4bf9-b343-d3f06f886841" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723763 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf48619-6b39-4215-950a-f8da809dcc11" containerName="mariadb-database-create" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723773 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ec561b-87d9-418d-9376-c48bb31d46f9" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.723785 4740 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c28029f1-eca0-4cd5-95b3-774c21d6d0ed" containerName="mariadb-account-create-update" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.762943 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.767360 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.767655 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.772562 4740 scope.go:117] "RemoveContainer" containerID="78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b" Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.780893 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b\": container with ID starting with 78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b not found: ID does not exist" containerID="78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.781590 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b"} err="failed to get container status \"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b\": rpc error: code = NotFound desc = could not find container \"78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b\": container with ID starting with 78db66073b9738beaa480aa1e936fa2e40191e6021baf4bf3bd578993e6f916b not found: ID does not exist" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.781707 4740 scope.go:117] "RemoveContainer" 
containerID="a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.784083 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:11:04 crc kubenswrapper[4740]: E0216 13:11:04.792508 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d\": container with ID starting with a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d not found: ID does not exist" containerID="a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.792569 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d"} err="failed to get container status \"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d\": rpc error: code = NotFound desc = could not find container \"a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d\": container with ID starting with a69cecc6abb072f2476df5f1b19a407448e49c86e9ef2331957076116e7df06d not found: ID does not exist" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.892991 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-logs\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893254 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893375 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893553 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpr7s\" (UniqueName: \"kubernetes.io/projected/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-kube-api-access-mpr7s\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893676 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893768 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.893974 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.894401 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.997840 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998174 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998323 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpr7s\" (UniqueName: \"kubernetes.io/projected/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-kube-api-access-mpr7s\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998415 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998495 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998611 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998743 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.998869 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-logs\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:04 crc kubenswrapper[4740]: I0216 13:11:04.999499 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-logs\") pod \"glance-default-external-api-0\" (UID: 
\"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.000175 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.002396 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.011117 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.011416 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.012114 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " 
pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.019516 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.046668 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpr7s\" (UniqueName: \"kubernetes.io/projected/d8535644-0ebc-4cc6-bbc5-a5ef02f30685-kube-api-access-mpr7s\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.060991 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"d8535644-0ebc-4cc6-bbc5-a5ef02f30685\") " pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.103421 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.294697 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444d5830-ca5b-426e-a7da-785e35ae1e65" path="/var/lib/kubelet/pods/444d5830-ca5b-426e-a7da-785e35ae1e65/volumes" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623379 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerStarted","Data":"14b7b362722a6742396e9c6ada9fc6b8542386c6aa6a2d0db427fdcad3db1b40"} Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623552 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-central-agent" containerID="cri-o://b47ff1a3cc1cf6e6ce635669516cb1eeff7f42e4967363ec4445652f9d813b11" gracePeriod=30 Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623565 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="proxy-httpd" containerID="cri-o://14b7b362722a6742396e9c6ada9fc6b8542386c6aa6a2d0db427fdcad3db1b40" gracePeriod=30 Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623609 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623638 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-notification-agent" containerID="cri-o://8ce27faa665c6476b0cc53766481f3f6617cbb88c529599da7ba09a849b8b74d" gracePeriod=30 Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.623689 4740 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="sg-core" containerID="cri-o://915edf68cd3c7270b236b58655ea096c142d3604b5d0dd9bafbc0091d2a43aae" gracePeriod=30 Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.710973 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.788527633 podStartE2EDuration="6.710923118s" podCreationTimestamp="2026-02-16 13:10:59 +0000 UTC" firstStartedPulling="2026-02-16 13:11:00.740755612 +0000 UTC m=+1088.117104333" lastFinishedPulling="2026-02-16 13:11:04.663151097 +0000 UTC m=+1092.039499818" observedRunningTime="2026-02-16 13:11:05.645134714 +0000 UTC m=+1093.021483445" watchObservedRunningTime="2026-02-16 13:11:05.710923118 +0000 UTC m=+1093.087271839" Feb 16 13:11:05 crc kubenswrapper[4740]: W0216 13:11:05.711540 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8535644_0ebc_4cc6_bbc5_a5ef02f30685.slice/crio-779a311802c9a6d76fb87ab9b57278280ece55af45c53f0ade4127dbdb8b482f WatchSource:0}: Error finding container 779a311802c9a6d76fb87ab9b57278280ece55af45c53f0ade4127dbdb8b482f: Status 404 returned error can't find the container with id 779a311802c9a6d76fb87ab9b57278280ece55af45c53f0ade4127dbdb8b482f Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.712750 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:11:05 crc kubenswrapper[4740]: I0216 13:11:05.876547 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.126911 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637452 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerID="14b7b362722a6742396e9c6ada9fc6b8542386c6aa6a2d0db427fdcad3db1b40" exitCode=0 Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637785 4740 generic.go:334] "Generic (PLEG): container finished" podID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerID="915edf68cd3c7270b236b58655ea096c142d3604b5d0dd9bafbc0091d2a43aae" exitCode=2 Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637795 4740 generic.go:334] "Generic (PLEG): container finished" podID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerID="8ce27faa665c6476b0cc53766481f3f6617cbb88c529599da7ba09a849b8b74d" exitCode=0 Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637539 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerDied","Data":"14b7b362722a6742396e9c6ada9fc6b8542386c6aa6a2d0db427fdcad3db1b40"} Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637891 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerDied","Data":"915edf68cd3c7270b236b58655ea096c142d3604b5d0dd9bafbc0091d2a43aae"} Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.637935 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerDied","Data":"8ce27faa665c6476b0cc53766481f3f6617cbb88c529599da7ba09a849b8b74d"} Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.642350 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8535644-0ebc-4cc6-bbc5-a5ef02f30685","Type":"ContainerStarted","Data":"fd933378fd3e26ee2a045b2089a447b28202255138a0600775787bfcc5f7843d"} Feb 16 13:11:06 crc kubenswrapper[4740]: I0216 13:11:06.642392 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"d8535644-0ebc-4cc6-bbc5-a5ef02f30685","Type":"ContainerStarted","Data":"779a311802c9a6d76fb87ab9b57278280ece55af45c53f0ade4127dbdb8b482f"} Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.373809 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfmpm"] Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.375153 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.376644 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.376940 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-45m6j" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.379313 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.381122 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.388179 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfmpm"] Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.453704 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.453830 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.453944 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8spl\" (UniqueName: \"kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454025 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454069 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454119 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454141 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454167 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs\") pod \"ea4149c3-a18d-46e3-86b1-8a60e9127244\" (UID: \"ea4149c3-a18d-46e3-86b1-8a60e9127244\") " Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454465 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjmbn\" (UniqueName: \"kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: 
\"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454543 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.454637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.456322 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.460011 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs" (OuterVolumeSpecName: "logs") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.465683 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.474095 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts" (OuterVolumeSpecName: "scripts") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.477788 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl" (OuterVolumeSpecName: "kube-api-access-k8spl") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "kube-api-access-k8spl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.505657 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.540231 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558314 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558431 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjmbn\" (UniqueName: \"kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558529 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts\") pod \"nova-cell0-conductor-db-sync-bfmpm\" 
(UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558627 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558664 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558674 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8spl\" (UniqueName: \"kubernetes.io/projected/ea4149c3-a18d-46e3-86b1-8a60e9127244-kube-api-access-k8spl\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558684 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558731 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558741 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea4149c3-a18d-46e3-86b1-8a60e9127244-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.558749 4740 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 
13:11:07.562173 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data" (OuterVolumeSpecName: "config-data") pod "ea4149c3-a18d-46e3-86b1-8a60e9127244" (UID: "ea4149c3-a18d-46e3-86b1-8a60e9127244"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.565920 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.568290 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.570910 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.584790 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjmbn\" (UniqueName: \"kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn\") pod \"nova-cell0-conductor-db-sync-bfmpm\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 
13:11:07.614468 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.661366 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea4149c3-a18d-46e3-86b1-8a60e9127244-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.661407 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.674964 4740 generic.go:334] "Generic (PLEG): container finished" podID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerID="6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423" exitCode=0 Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.675046 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.675046 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerDied","Data":"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423"} Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.675111 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea4149c3-a18d-46e3-86b1-8a60e9127244","Type":"ContainerDied","Data":"b1f805b3f42130f9ec256249b24cf294052db79347125cb78ef7bd761396a42c"} Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.675132 4740 scope.go:117] "RemoveContainer" containerID="6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.681186 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8535644-0ebc-4cc6-bbc5-a5ef02f30685","Type":"ContainerStarted","Data":"bb78e5415ed170cd65d1da7cf4ac8e474649a626d35a5fb6c378aa0900606bd5"} Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.694064 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.712293 4740 scope.go:117] "RemoveContainer" containerID="e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.719535 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.719514701 podStartE2EDuration="3.719514701s" podCreationTimestamp="2026-02-16 13:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:11:07.702195944 +0000 UTC m=+1095.078544685" watchObservedRunningTime="2026-02-16 13:11:07.719514701 +0000 UTC m=+1095.095863422" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.731497 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.751756 4740 scope.go:117] "RemoveContainer" containerID="6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423" Feb 16 13:11:07 crc kubenswrapper[4740]: E0216 13:11:07.756220 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423\": container with ID starting with 6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423 not found: ID does not exist" containerID="6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.756286 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423"} err="failed to get container status \"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423\": rpc error: code = NotFound 
desc = could not find container \"6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423\": container with ID starting with 6744213408b37d21e6ccd477fa2218310b8c258c5997a9259d58c47fffa6a423 not found: ID does not exist" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.756319 4740 scope.go:117] "RemoveContainer" containerID="e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6" Feb 16 13:11:07 crc kubenswrapper[4740]: E0216 13:11:07.756976 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6\": container with ID starting with e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6 not found: ID does not exist" containerID="e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.757004 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6"} err="failed to get container status \"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6\": rpc error: code = NotFound desc = could not find container \"e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6\": container with ID starting with e322972f7cd22f77cc704c7b82b9afc5d25b117f5c9032db245ac6cd86bd5da6 not found: ID does not exist" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.757038 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.771623 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:11:07 crc kubenswrapper[4740]: E0216 13:11:07.772044 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-httpd" Feb 
16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.772062 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-httpd" Feb 16 13:11:07 crc kubenswrapper[4740]: E0216 13:11:07.772089 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-log" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.772097 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-log" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.772278 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-log" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.772297 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" containerName="glance-httpd" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.773315 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.775006 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.775527 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.787253 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864763 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864843 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864874 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864904 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864924 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.864954 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghg7q\" (UniqueName: \"kubernetes.io/projected/1da7f67c-ce66-4f6b-b760-f2ae017599c0-kube-api-access-ghg7q\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.865013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.865066 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.966700 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967015 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967054 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967079 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967112 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967133 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967168 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghg7q\" (UniqueName: \"kubernetes.io/projected/1da7f67c-ce66-4f6b-b760-f2ae017599c0-kube-api-access-ghg7q\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967221 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967304 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.967636 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da7f67c-ce66-4f6b-b760-f2ae017599c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.969203 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") device mount path 
\"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.970995 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.971479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.977261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.980256 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da7f67c-ce66-4f6b-b760-f2ae017599c0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:07 crc kubenswrapper[4740]: I0216 13:11:07.998459 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghg7q\" (UniqueName: \"kubernetes.io/projected/1da7f67c-ce66-4f6b-b760-f2ae017599c0-kube-api-access-ghg7q\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:08 
crc kubenswrapper[4740]: I0216 13:11:08.015329 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"1da7f67c-ce66-4f6b-b760-f2ae017599c0\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.095572 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.254655 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfmpm"] Feb 16 13:11:08 crc kubenswrapper[4740]: W0216 13:11:08.256319 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fce641e_1b76_4b99_a99d_9a0ccbf9680e.slice/crio-10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd WatchSource:0}: Error finding container 10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd: Status 404 returned error can't find the container with id 10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.704001 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" event={"ID":"2fce641e-1b76-4b99-a99d-9a0ccbf9680e","Type":"ContainerStarted","Data":"10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd"} Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.707211 4740 generic.go:334] "Generic (PLEG): container finished" podID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerID="1e87b39de0e646ce6ba449ee63a82579996c381dcaa6e162f4b5763dfb4523c5" exitCode=0 Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.708007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" 
event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerDied","Data":"1e87b39de0e646ce6ba449ee63a82579996c381dcaa6e162f4b5763dfb4523c5"} Feb 16 13:11:08 crc kubenswrapper[4740]: I0216 13:11:08.774059 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.032899 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.086994 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cppt\" (UniqueName: \"kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt\") pod \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.087093 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config\") pod \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.087349 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle\") pod \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.087445 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs\") pod \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.087496 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config\") pod \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\" (UID: \"4bc5b698-8fd6-4919-a02b-eb74665d83e0\") " Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.097680 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4bc5b698-8fd6-4919-a02b-eb74665d83e0" (UID: "4bc5b698-8fd6-4919-a02b-eb74665d83e0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.098638 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt" (OuterVolumeSpecName: "kube-api-access-5cppt") pod "4bc5b698-8fd6-4919-a02b-eb74665d83e0" (UID: "4bc5b698-8fd6-4919-a02b-eb74665d83e0"). InnerVolumeSpecName "kube-api-access-5cppt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.169713 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4bc5b698-8fd6-4919-a02b-eb74665d83e0" (UID: "4bc5b698-8fd6-4919-a02b-eb74665d83e0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.182001 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bc5b698-8fd6-4919-a02b-eb74665d83e0" (UID: "4bc5b698-8fd6-4919-a02b-eb74665d83e0"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.187692 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config" (OuterVolumeSpecName: "config") pod "4bc5b698-8fd6-4919-a02b-eb74665d83e0" (UID: "4bc5b698-8fd6-4919-a02b-eb74665d83e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.197890 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.198499 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cppt\" (UniqueName: \"kubernetes.io/projected/4bc5b698-8fd6-4919-a02b-eb74665d83e0-kube-api-access-5cppt\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.198515 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.198524 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.198532 4740 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc5b698-8fd6-4919-a02b-eb74665d83e0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.341297 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ea4149c3-a18d-46e3-86b1-8a60e9127244" path="/var/lib/kubelet/pods/ea4149c3-a18d-46e3-86b1-8a60e9127244/volumes" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.718589 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f8c7948d-wxf52" event={"ID":"4bc5b698-8fd6-4919-a02b-eb74665d83e0","Type":"ContainerDied","Data":"1506e84718fd32b51421d0c20379f7ab72db7c5398d1bae1271890a3cd491379"} Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.718982 4740 scope.go:117] "RemoveContainer" containerID="90b8c13105002e23bcbcbbfad6decf1c72cc959f43ea05e2caf8e9e59758b0a9" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.718601 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84f8c7948d-wxf52" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.728663 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1da7f67c-ce66-4f6b-b760-f2ae017599c0","Type":"ContainerStarted","Data":"f230f1aef3a88a687ccdc2f9c83e07dc9225aedc4710c74e61723da15a524197"} Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.728710 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1da7f67c-ce66-4f6b-b760-f2ae017599c0","Type":"ContainerStarted","Data":"df3544372b26e6e50da341e4f25973095e526c28cdd3b97eea2ef30a428556ae"} Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.750040 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.752717 4740 scope.go:117] "RemoveContainer" containerID="1e87b39de0e646ce6ba449ee63a82579996c381dcaa6e162f4b5763dfb4523c5" Feb 16 13:11:09 crc kubenswrapper[4740]: I0216 13:11:09.762331 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-84f8c7948d-wxf52"] Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.491045 4740 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.656664 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsswg\" (UniqueName: \"kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.656876 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.656907 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.656980 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.657020 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.657043 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.657083 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs\") pod \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\" (UID: \"9b0f3f50-6ea0-4ee0-af75-c020e91c8495\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.658212 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs" (OuterVolumeSpecName: "logs") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.663704 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.663755 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg" (OuterVolumeSpecName: "kube-api-access-wsswg") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "kube-api-access-wsswg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.696614 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts" (OuterVolumeSpecName: "scripts") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.715763 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.719337 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data" (OuterVolumeSpecName: "config-data") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.728479 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9b0f3f50-6ea0-4ee0-af75-c020e91c8495" (UID: "9b0f3f50-6ea0-4ee0-af75-c020e91c8495"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.745099 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1da7f67c-ce66-4f6b-b760-f2ae017599c0","Type":"ContainerStarted","Data":"001873da5d2ed97221d998867ec288d071967aab9212258303cfac59327ae318"} Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.749972 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerID="450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0" exitCode=137 Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.750021 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerDied","Data":"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0"} Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.750318 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5476559f6b-jvkbv" event={"ID":"9b0f3f50-6ea0-4ee0-af75-c020e91c8495","Type":"ContainerDied","Data":"06bf25d33138128d15c50d580ea5273787a0565c881e9530d7786cb52837cf0e"} Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.750061 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5476559f6b-jvkbv" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.750369 4740 scope.go:117] "RemoveContainer" containerID="e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.756552 4740 generic.go:334] "Generic (PLEG): container finished" podID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerID="b47ff1a3cc1cf6e6ce635669516cb1eeff7f42e4967363ec4445652f9d813b11" exitCode=0 Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.756590 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerDied","Data":"b47ff1a3cc1cf6e6ce635669516cb1eeff7f42e4967363ec4445652f9d813b11"} Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.756614 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"772c2a1c-acd4-4227-829d-e4235742b5f4","Type":"ContainerDied","Data":"fc5f113e8cade2490d498d1de5db88d22ff7ac6d77cb5f061cb4a440e771185d"} Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.756626 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc5f113e8cade2490d498d1de5db88d22ff7ac6d77cb5f061cb4a440e771185d" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760017 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsswg\" (UniqueName: \"kubernetes.io/projected/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-kube-api-access-wsswg\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760042 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760051 4740 reconciler_common.go:293] "Volume detached for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760060 4740 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760070 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760081 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.760089 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b0f3f50-6ea0-4ee0-af75-c020e91c8495-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.770020 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.770004079 podStartE2EDuration="3.770004079s" podCreationTimestamp="2026-02-16 13:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:11:10.765777556 +0000 UTC m=+1098.142126277" watchObservedRunningTime="2026-02-16 13:11:10.770004079 +0000 UTC m=+1098.146352800" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.799476 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.804085 4740 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.810957 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5476559f6b-jvkbv"] Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.922144 4740 scope.go:117] "RemoveContainer" containerID="450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.943964 4740 scope.go:117] "RemoveContainer" containerID="e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8" Feb 16 13:11:10 crc kubenswrapper[4740]: E0216 13:11:10.944463 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8\": container with ID starting with e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8 not found: ID does not exist" containerID="e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.944506 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8"} err="failed to get container status \"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8\": rpc error: code = NotFound desc = could not find container \"e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8\": container with ID starting with e1e195ab35b71f2d24ec05bfc013231435635c39fa8e75f60323f30562f69bc8 not found: ID does not exist" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.944531 4740 scope.go:117] "RemoveContainer" containerID="450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0" Feb 16 13:11:10 crc kubenswrapper[4740]: E0216 13:11:10.944947 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0\": container with ID starting with 450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0 not found: ID does not exist" containerID="450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.944981 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0"} err="failed to get container status \"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0\": rpc error: code = NotFound desc = could not find container \"450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0\": container with ID starting with 450ba384bee3dfda00f91ae88bf40cfb03c435f241d411d9b95cee80d340a4f0 not found: ID does not exist" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963090 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963158 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963192 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: 
I0216 13:11:10.963369 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963407 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggbsf\" (UniqueName: \"kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963426 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963449 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963511 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml\") pod \"772c2a1c-acd4-4227-829d-e4235742b5f4\" (UID: \"772c2a1c-acd4-4227-829d-e4235742b5f4\") " Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.963734 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.964320 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.964339 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/772c2a1c-acd4-4227-829d-e4235742b5f4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.966268 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf" (OuterVolumeSpecName: "kube-api-access-ggbsf") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "kube-api-access-ggbsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:10 crc kubenswrapper[4740]: I0216 13:11:10.966611 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts" (OuterVolumeSpecName: "scripts") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.005407 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.044381 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.066083 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.066116 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggbsf\" (UniqueName: \"kubernetes.io/projected/772c2a1c-acd4-4227-829d-e4235742b5f4-kube-api-access-ggbsf\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.066132 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.066144 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.089125 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data" (OuterVolumeSpecName: "config-data") pod "772c2a1c-acd4-4227-829d-e4235742b5f4" (UID: "772c2a1c-acd4-4227-829d-e4235742b5f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.168446 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772c2a1c-acd4-4227-829d-e4235742b5f4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.297620 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" path="/var/lib/kubelet/pods/4bc5b698-8fd6-4919-a02b-eb74665d83e0/volumes" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.298395 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" path="/var/lib/kubelet/pods/9b0f3f50-6ea0-4ee0-af75-c020e91c8495/volumes" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.766419 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.798945 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.806635 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.821739 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822436 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="sg-core" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822460 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="sg-core" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822476 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" 
containerName="proxy-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822483 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="proxy-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822494 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822502 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822516 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-notification-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822523 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-notification-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822545 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon-log" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822552 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon-log" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822563 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822572 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822594 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" 
containerName="neutron-api" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822600 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-api" Feb 16 13:11:11 crc kubenswrapper[4740]: E0216 13:11:11.822610 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-central-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822617 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-central-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822902 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="sg-core" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822919 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822938 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-notification-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822950 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc5b698-8fd6-4919-a02b-eb74665d83e0" containerName="neutron-api" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822965 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="proxy-httpd" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822976 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822989 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" containerName="ceilometer-central-agent" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.822998 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0f3f50-6ea0-4ee0-af75-c020e91c8495" containerName="horizon-log" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.824958 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.827208 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.827455 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.840093 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981134 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981189 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981245 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ps5p\" (UniqueName: \"kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p\") pod \"ceilometer-0\" (UID: 
\"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981272 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981286 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981308 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:11 crc kubenswrapper[4740]: I0216 13:11:11.981355 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.082776 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.082876 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.082995 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ps5p\" (UniqueName: \"kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083037 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083058 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083106 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083457 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " 
pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083564 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.083663 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.089045 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.090179 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.091257 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.099610 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ps5p\" (UniqueName: 
\"kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.100091 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts\") pod \"ceilometer-0\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.139978 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:12 crc kubenswrapper[4740]: I0216 13:11:12.989280 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:13 crc kubenswrapper[4740]: I0216 13:11:13.297274 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="772c2a1c-acd4-4227-829d-e4235742b5f4" path="/var/lib/kubelet/pods/772c2a1c-acd4-4227-829d-e4235742b5f4/volumes" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.104027 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.104372 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.135770 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.148128 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.575644 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.575708 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.807629 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:11:15 crc kubenswrapper[4740]: I0216 13:11:15.807668 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:11:16 crc kubenswrapper[4740]: I0216 13:11:16.818516 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" event={"ID":"2fce641e-1b76-4b99-a99d-9a0ccbf9680e","Type":"ContainerStarted","Data":"7dd2f91b7b77a95d9efed18149d982eaea3f14083dd0271b85d27533a0b37d3b"} Feb 16 13:11:16 crc kubenswrapper[4740]: I0216 13:11:16.838639 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" podStartSLOduration=1.5292308559999999 podStartE2EDuration="9.838622298s" podCreationTimestamp="2026-02-16 13:11:07 +0000 UTC" firstStartedPulling="2026-02-16 13:11:08.259495349 +0000 UTC m=+1095.635844070" lastFinishedPulling="2026-02-16 13:11:16.568886791 +0000 UTC m=+1103.945235512" observedRunningTime="2026-02-16 13:11:16.835121317 +0000 UTC m=+1104.211470038" watchObservedRunningTime="2026-02-16 13:11:16.838622298 +0000 UTC m=+1104.214971019" Feb 16 13:11:16 crc kubenswrapper[4740]: I0216 13:11:16.974001 4740 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:17 crc kubenswrapper[4740]: I0216 13:11:17.829504 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerStarted","Data":"3c08f3c4d132ba264a22af327ebe21b3c3ed4324088fef015b40790eb4878e70"} Feb 16 13:11:17 crc kubenswrapper[4740]: I0216 13:11:17.829874 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerStarted","Data":"c5df5e688bb71ab81893ca2057b6b0cc06003b38877fb4e309ba51316596edfc"} Feb 16 13:11:17 crc kubenswrapper[4740]: I0216 13:11:17.832424 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:11:17 crc kubenswrapper[4740]: I0216 13:11:17.833848 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:11:17 crc kubenswrapper[4740]: I0216 13:11:17.864342 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.097573 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.097622 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.131861 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.139600 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.844543 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerStarted","Data":"f251def4f6e79c938ed791e78c8d49d0d744dfe6ab6538388a7c75361bdf2939"} Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.844903 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:18 crc kubenswrapper[4740]: I0216 13:11:18.845456 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:19 crc kubenswrapper[4740]: I0216 13:11:19.852769 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerStarted","Data":"a1fe664ff4dba633e5c2ceeabea6e248cc29fb01509a09c4d810877a6db7482b"} Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.828220 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.844115 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.864243 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerStarted","Data":"10c1145d53cd2b89a58d20979cc3e78503d07b38832842c393d26e60199595dd"} Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.864497 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-central-agent" containerID="cri-o://3c08f3c4d132ba264a22af327ebe21b3c3ed4324088fef015b40790eb4878e70" gracePeriod=30 Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.864552 4740 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-notification-agent" containerID="cri-o://f251def4f6e79c938ed791e78c8d49d0d744dfe6ab6538388a7c75361bdf2939" gracePeriod=30 Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.864530 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="proxy-httpd" containerID="cri-o://10c1145d53cd2b89a58d20979cc3e78503d07b38832842c393d26e60199595dd" gracePeriod=30 Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.864547 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="sg-core" containerID="cri-o://a1fe664ff4dba633e5c2ceeabea6e248cc29fb01509a09c4d810877a6db7482b" gracePeriod=30 Feb 16 13:11:20 crc kubenswrapper[4740]: I0216 13:11:20.911315 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.789800044 podStartE2EDuration="9.911299242s" podCreationTimestamp="2026-02-16 13:11:11 +0000 UTC" firstStartedPulling="2026-02-16 13:11:16.979566252 +0000 UTC m=+1104.355914973" lastFinishedPulling="2026-02-16 13:11:20.10106545 +0000 UTC m=+1107.477414171" observedRunningTime="2026-02-16 13:11:20.908891266 +0000 UTC m=+1108.285239977" watchObservedRunningTime="2026-02-16 13:11:20.911299242 +0000 UTC m=+1108.287647963" Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.875654 4740 generic.go:334] "Generic (PLEG): container finished" podID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerID="10c1145d53cd2b89a58d20979cc3e78503d07b38832842c393d26e60199595dd" exitCode=0 Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.876008 4740 generic.go:334] "Generic (PLEG): container finished" podID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" 
containerID="a1fe664ff4dba633e5c2ceeabea6e248cc29fb01509a09c4d810877a6db7482b" exitCode=2 Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.876024 4740 generic.go:334] "Generic (PLEG): container finished" podID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerID="f251def4f6e79c938ed791e78c8d49d0d744dfe6ab6538388a7c75361bdf2939" exitCode=0 Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.875859 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerDied","Data":"10c1145d53cd2b89a58d20979cc3e78503d07b38832842c393d26e60199595dd"} Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.876928 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerDied","Data":"a1fe664ff4dba633e5c2ceeabea6e248cc29fb01509a09c4d810877a6db7482b"} Feb 16 13:11:21 crc kubenswrapper[4740]: I0216 13:11:21.876943 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerDied","Data":"f251def4f6e79c938ed791e78c8d49d0d744dfe6ab6538388a7c75361bdf2939"} Feb 16 13:11:25 crc kubenswrapper[4740]: I0216 13:11:25.918989 4740 generic.go:334] "Generic (PLEG): container finished" podID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerID="3c08f3c4d132ba264a22af327ebe21b3c3ed4324088fef015b40790eb4878e70" exitCode=0 Feb 16 13:11:25 crc kubenswrapper[4740]: I0216 13:11:25.919052 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerDied","Data":"3c08f3c4d132ba264a22af327ebe21b3c3ed4324088fef015b40790eb4878e70"} Feb 16 13:11:25 crc kubenswrapper[4740]: I0216 13:11:25.919669 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"95a355c5-9192-49fa-9d5d-ca9d1cba83c5","Type":"ContainerDied","Data":"c5df5e688bb71ab81893ca2057b6b0cc06003b38877fb4e309ba51316596edfc"} Feb 16 13:11:25 crc kubenswrapper[4740]: I0216 13:11:25.919694 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5df5e688bb71ab81893ca2057b6b0cc06003b38877fb4e309ba51316596edfc" Feb 16 13:11:25 crc kubenswrapper[4740]: I0216 13:11:25.920717 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.061871 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ps5p\" (UniqueName: \"kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.061938 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062006 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062036 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: 
I0216 13:11:26.062074 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062115 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062275 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts\") pod \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\" (UID: \"95a355c5-9192-49fa-9d5d-ca9d1cba83c5\") " Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062739 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.062900 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.075910 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts" (OuterVolumeSpecName: "scripts") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.078199 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p" (OuterVolumeSpecName: "kube-api-access-9ps5p") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "kube-api-access-9ps5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.089925 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.155874 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data" (OuterVolumeSpecName: "config-data") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164048 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164083 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ps5p\" (UniqueName: \"kubernetes.io/projected/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-kube-api-access-9ps5p\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164098 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164108 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164119 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.164130 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.170018 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95a355c5-9192-49fa-9d5d-ca9d1cba83c5" (UID: "95a355c5-9192-49fa-9d5d-ca9d1cba83c5"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.266136 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a355c5-9192-49fa-9d5d-ca9d1cba83c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.931163 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.964606 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.972470 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.992206 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:26 crc kubenswrapper[4740]: E0216 13:11:26.992683 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="proxy-httpd" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.992719 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="proxy-httpd" Feb 16 13:11:26 crc kubenswrapper[4740]: E0216 13:11:26.992738 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-notification-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.992746 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-notification-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: E0216 13:11:26.992771 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="sg-core" Feb 
16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.992779 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="sg-core" Feb 16 13:11:26 crc kubenswrapper[4740]: E0216 13:11:26.992791 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-central-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.992799 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-central-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.993066 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="proxy-httpd" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.993085 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-central-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.993106 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="ceilometer-notification-agent" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.993125 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" containerName="sg-core" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.995238 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.999445 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:11:26 crc kubenswrapper[4740]: I0216 13:11:26.999816 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.014420 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117105 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117148 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txnw9\" (UniqueName: \"kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117167 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117183 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " 
pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117328 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117354 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.117389 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218580 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218671 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218697 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-txnw9\" (UniqueName: \"kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218728 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218871 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.218902 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.219355 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc 
kubenswrapper[4740]: I0216 13:11:27.219598 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.224875 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.225119 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.225709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.228715 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.242568 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txnw9\" (UniqueName: \"kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9\") pod \"ceilometer-0\" (UID: 
\"9f7d441d-037c-4d9b-a593-295360acb873\") " pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.297961 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a355c5-9192-49fa-9d5d-ca9d1cba83c5" path="/var/lib/kubelet/pods/95a355c5-9192-49fa-9d5d-ca9d1cba83c5/volumes" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.320612 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.767942 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:27 crc kubenswrapper[4740]: W0216 13:11:27.770724 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f7d441d_037c_4d9b_a593_295360acb873.slice/crio-ca966942a1948c048e34dd7db2248c602d2368f1200c700463ea4b5111339e2c WatchSource:0}: Error finding container ca966942a1948c048e34dd7db2248c602d2368f1200c700463ea4b5111339e2c: Status 404 returned error can't find the container with id ca966942a1948c048e34dd7db2248c602d2368f1200c700463ea4b5111339e2c Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.940645 4740 generic.go:334] "Generic (PLEG): container finished" podID="2fce641e-1b76-4b99-a99d-9a0ccbf9680e" containerID="7dd2f91b7b77a95d9efed18149d982eaea3f14083dd0271b85d27533a0b37d3b" exitCode=0 Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.940718 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" event={"ID":"2fce641e-1b76-4b99-a99d-9a0ccbf9680e","Type":"ContainerDied","Data":"7dd2f91b7b77a95d9efed18149d982eaea3f14083dd0271b85d27533a0b37d3b"} Feb 16 13:11:27 crc kubenswrapper[4740]: I0216 13:11:27.942146 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerStarted","Data":"ca966942a1948c048e34dd7db2248c602d2368f1200c700463ea4b5111339e2c"} Feb 16 13:11:28 crc kubenswrapper[4740]: I0216 13:11:28.955992 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerStarted","Data":"af71ecf0eaf794ce88d60d93ac05685fd22518f9e51a07094d60474546203d7f"} Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.309566 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.459532 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle\") pod \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.459635 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts\") pod \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.459741 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjmbn\" (UniqueName: \"kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn\") pod \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.459779 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data\") pod 
\"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\" (UID: \"2fce641e-1b76-4b99-a99d-9a0ccbf9680e\") " Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.472044 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn" (OuterVolumeSpecName: "kube-api-access-qjmbn") pod "2fce641e-1b76-4b99-a99d-9a0ccbf9680e" (UID: "2fce641e-1b76-4b99-a99d-9a0ccbf9680e"). InnerVolumeSpecName "kube-api-access-qjmbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.472154 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts" (OuterVolumeSpecName: "scripts") pod "2fce641e-1b76-4b99-a99d-9a0ccbf9680e" (UID: "2fce641e-1b76-4b99-a99d-9a0ccbf9680e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.486759 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data" (OuterVolumeSpecName: "config-data") pod "2fce641e-1b76-4b99-a99d-9a0ccbf9680e" (UID: "2fce641e-1b76-4b99-a99d-9a0ccbf9680e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.508470 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fce641e-1b76-4b99-a99d-9a0ccbf9680e" (UID: "2fce641e-1b76-4b99-a99d-9a0ccbf9680e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.561396 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.561430 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.561440 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjmbn\" (UniqueName: \"kubernetes.io/projected/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-kube-api-access-qjmbn\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.561452 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fce641e-1b76-4b99-a99d-9a0ccbf9680e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.965996 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" event={"ID":"2fce641e-1b76-4b99-a99d-9a0ccbf9680e","Type":"ContainerDied","Data":"10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd"} Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.966039 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10c8a7e878f9fe5fa072e8c00c9c54571fe3bd018e9f6c0b2f6a9a4e99645ffd" Feb 16 13:11:29 crc kubenswrapper[4740]: I0216 13:11:29.966079 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bfmpm" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.128922 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:11:30 crc kubenswrapper[4740]: E0216 13:11:30.129675 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce641e-1b76-4b99-a99d-9a0ccbf9680e" containerName="nova-cell0-conductor-db-sync" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.129700 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce641e-1b76-4b99-a99d-9a0ccbf9680e" containerName="nova-cell0-conductor-db-sync" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.129975 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fce641e-1b76-4b99-a99d-9a0ccbf9680e" containerName="nova-cell0-conductor-db-sync" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.130679 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.133322 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-45m6j" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.134489 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.140360 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.277175 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: 
I0216 13:11:30.277379 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlss6\" (UniqueName: \"kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.277542 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.379591 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlss6\" (UniqueName: \"kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.379682 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.379862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.384235 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.387262 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.396915 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlss6\" (UniqueName: \"kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6\") pod \"nova-cell0-conductor-0\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.495578 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.922805 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:11:30 crc kubenswrapper[4740]: W0216 13:11:30.930961 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod177b2d8c_29ab_49ea_8509_12b489123ad9.slice/crio-e1ff36319be19db06a0b37da7df05b96fab8d1846a2ea48349ac6f0003bfbf91 WatchSource:0}: Error finding container e1ff36319be19db06a0b37da7df05b96fab8d1846a2ea48349ac6f0003bfbf91: Status 404 returned error can't find the container with id e1ff36319be19db06a0b37da7df05b96fab8d1846a2ea48349ac6f0003bfbf91 Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.979026 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerStarted","Data":"b848f033913ec604afefeb06b722f5ccea1a3a3dbf64288c6d13e841b678efd1"} Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.979078 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerStarted","Data":"b1ecd77baacb422358d8a5e9615014969aa6e2e298364e95fd12faac16699d11"} Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.980328 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"177b2d8c-29ab-49ea-8509-12b489123ad9","Type":"ContainerStarted","Data":"e1ff36319be19db06a0b37da7df05b96fab8d1846a2ea48349ac6f0003bfbf91"} Feb 16 13:11:30 crc kubenswrapper[4740]: I0216 13:11:30.990333 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:11:31 crc kubenswrapper[4740]: I0216 13:11:31.992235 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"177b2d8c-29ab-49ea-8509-12b489123ad9","Type":"ContainerStarted","Data":"29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068"} Feb 16 13:11:31 crc kubenswrapper[4740]: I0216 13:11:31.992576 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 13:11:31 crc kubenswrapper[4740]: I0216 13:11:31.992410 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" containerID="cri-o://29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" gracePeriod=30 Feb 16 13:11:32 crc kubenswrapper[4740]: I0216 13:11:32.021911 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.021860579 podStartE2EDuration="2.021860579s" podCreationTimestamp="2026-02-16 13:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:11:32.015735866 +0000 UTC m=+1119.392084587" watchObservedRunningTime="2026-02-16 13:11:32.021860579 +0000 UTC m=+1119.398209320" Feb 16 13:11:32 crc kubenswrapper[4740]: I0216 13:11:32.736436 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:33 crc kubenswrapper[4740]: I0216 13:11:33.005264 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerStarted","Data":"15ce57e0100f22576756ff2e0ea84492ae15d4243e46123fc4c41c853a4db474"} Feb 16 13:11:33 crc kubenswrapper[4740]: I0216 13:11:33.006740 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:11:33 crc kubenswrapper[4740]: I0216 13:11:33.030979 4740 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ceilometer-0" podStartSLOduration=2.772290543 podStartE2EDuration="7.030960403s" podCreationTimestamp="2026-02-16 13:11:26 +0000 UTC" firstStartedPulling="2026-02-16 13:11:27.772795233 +0000 UTC m=+1115.149143954" lastFinishedPulling="2026-02-16 13:11:32.031465093 +0000 UTC m=+1119.407813814" observedRunningTime="2026-02-16 13:11:33.027942137 +0000 UTC m=+1120.404290878" watchObservedRunningTime="2026-02-16 13:11:33.030960403 +0000 UTC m=+1120.407309134" Feb 16 13:11:34 crc kubenswrapper[4740]: I0216 13:11:34.014146 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-central-agent" containerID="cri-o://af71ecf0eaf794ce88d60d93ac05685fd22518f9e51a07094d60474546203d7f" gracePeriod=30 Feb 16 13:11:34 crc kubenswrapper[4740]: I0216 13:11:34.014726 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="proxy-httpd" containerID="cri-o://15ce57e0100f22576756ff2e0ea84492ae15d4243e46123fc4c41c853a4db474" gracePeriod=30 Feb 16 13:11:34 crc kubenswrapper[4740]: I0216 13:11:34.014852 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="sg-core" containerID="cri-o://b848f033913ec604afefeb06b722f5ccea1a3a3dbf64288c6d13e841b678efd1" gracePeriod=30 Feb 16 13:11:34 crc kubenswrapper[4740]: I0216 13:11:34.015046 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-notification-agent" containerID="cri-o://b1ecd77baacb422358d8a5e9615014969aa6e2e298364e95fd12faac16699d11" gracePeriod=30 Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026087 4740 generic.go:334] "Generic (PLEG): container 
finished" podID="9f7d441d-037c-4d9b-a593-295360acb873" containerID="15ce57e0100f22576756ff2e0ea84492ae15d4243e46123fc4c41c853a4db474" exitCode=0 Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026470 4740 generic.go:334] "Generic (PLEG): container finished" podID="9f7d441d-037c-4d9b-a593-295360acb873" containerID="b848f033913ec604afefeb06b722f5ccea1a3a3dbf64288c6d13e841b678efd1" exitCode=2 Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026481 4740 generic.go:334] "Generic (PLEG): container finished" podID="9f7d441d-037c-4d9b-a593-295360acb873" containerID="b1ecd77baacb422358d8a5e9615014969aa6e2e298364e95fd12faac16699d11" exitCode=0 Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026491 4740 generic.go:334] "Generic (PLEG): container finished" podID="9f7d441d-037c-4d9b-a593-295360acb873" containerID="af71ecf0eaf794ce88d60d93ac05685fd22518f9e51a07094d60474546203d7f" exitCode=0 Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026516 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerDied","Data":"15ce57e0100f22576756ff2e0ea84492ae15d4243e46123fc4c41c853a4db474"} Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026548 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerDied","Data":"b848f033913ec604afefeb06b722f5ccea1a3a3dbf64288c6d13e841b678efd1"} Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026562 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerDied","Data":"b1ecd77baacb422358d8a5e9615014969aa6e2e298364e95fd12faac16699d11"} Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.026576 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerDied","Data":"af71ecf0eaf794ce88d60d93ac05685fd22518f9e51a07094d60474546203d7f"} Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.275586 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373543 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373647 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373697 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373794 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373843 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txnw9\" (UniqueName: 
\"kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373878 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.373952 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd\") pod \"9f7d441d-037c-4d9b-a593-295360acb873\" (UID: \"9f7d441d-037c-4d9b-a593-295360acb873\") " Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.374403 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.374529 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.379207 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts" (OuterVolumeSpecName: "scripts") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.384018 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9" (OuterVolumeSpecName: "kube-api-access-txnw9") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "kube-api-access-txnw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.401671 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.458952 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475902 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475941 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475956 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475967 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f7d441d-037c-4d9b-a593-295360acb873-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475982 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txnw9\" (UniqueName: \"kubernetes.io/projected/9f7d441d-037c-4d9b-a593-295360acb873-kube-api-access-txnw9\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.475997 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.478229 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data" (OuterVolumeSpecName: "config-data") pod "9f7d441d-037c-4d9b-a593-295360acb873" (UID: "9f7d441d-037c-4d9b-a593-295360acb873"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:11:35 crc kubenswrapper[4740]: I0216 13:11:35.577584 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f7d441d-037c-4d9b-a593-295360acb873-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.040282 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f7d441d-037c-4d9b-a593-295360acb873","Type":"ContainerDied","Data":"ca966942a1948c048e34dd7db2248c602d2368f1200c700463ea4b5111339e2c"} Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.040347 4740 scope.go:117] "RemoveContainer" containerID="15ce57e0100f22576756ff2e0ea84492ae15d4243e46123fc4c41c853a4db474" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.040371 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.059171 4740 scope.go:117] "RemoveContainer" containerID="b848f033913ec604afefeb06b722f5ccea1a3a3dbf64288c6d13e841b678efd1" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.079932 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.082364 4740 scope.go:117] "RemoveContainer" containerID="b1ecd77baacb422358d8a5e9615014969aa6e2e298364e95fd12faac16699d11" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.091261 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.105608 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:36 crc kubenswrapper[4740]: E0216 13:11:36.105965 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7d441d-037c-4d9b-a593-295360acb873" 
containerName="ceilometer-notification-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.105983 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-notification-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: E0216 13:11:36.106015 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="proxy-httpd" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106021 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="proxy-httpd" Feb 16 13:11:36 crc kubenswrapper[4740]: E0216 13:11:36.106034 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-central-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106040 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-central-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: E0216 13:11:36.106056 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="sg-core" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106061 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="sg-core" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106248 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="proxy-httpd" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106262 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="sg-core" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106280 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-notification-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.106291 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7d441d-037c-4d9b-a593-295360acb873" containerName="ceilometer-central-agent" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.108868 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.110618 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.110716 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.119503 4740 scope.go:117] "RemoveContainer" containerID="af71ecf0eaf794ce88d60d93ac05685fd22518f9e51a07094d60474546203d7f" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.119691 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291220 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291288 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291396 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291659 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291841 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrd24\" (UniqueName: \"kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291948 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.291993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394108 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394274 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394323 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394512 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrd24\" (UniqueName: \"kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.394601 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 
13:11:36.394688 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.396651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.396923 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.399501 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.400083 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.401450 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " 
pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.410198 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.413126 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrd24\" (UniqueName: \"kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24\") pod \"ceilometer-0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.427508 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:11:36 crc kubenswrapper[4740]: I0216 13:11:36.899152 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:11:36 crc kubenswrapper[4740]: W0216 13:11:36.902657 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6a38c8_9c96_44fc_a4cc_247e314350b0.slice/crio-8389d1c57a3628927feb7f7fa0fa61763312bafbfe5d743cb250a469e83bb102 WatchSource:0}: Error finding container 8389d1c57a3628927feb7f7fa0fa61763312bafbfe5d743cb250a469e83bb102: Status 404 returned error can't find the container with id 8389d1c57a3628927feb7f7fa0fa61763312bafbfe5d743cb250a469e83bb102 Feb 16 13:11:37 crc kubenswrapper[4740]: I0216 13:11:37.050484 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerStarted","Data":"8389d1c57a3628927feb7f7fa0fa61763312bafbfe5d743cb250a469e83bb102"} Feb 16 13:11:37 crc kubenswrapper[4740]: I0216 13:11:37.291899 4740 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="9f7d441d-037c-4d9b-a593-295360acb873" path="/var/lib/kubelet/pods/9f7d441d-037c-4d9b-a593-295360acb873/volumes" Feb 16 13:11:38 crc kubenswrapper[4740]: I0216 13:11:38.076747 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerStarted","Data":"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f"} Feb 16 13:11:39 crc kubenswrapper[4740]: I0216 13:11:39.087664 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerStarted","Data":"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706"} Feb 16 13:11:39 crc kubenswrapper[4740]: I0216 13:11:39.088023 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerStarted","Data":"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d"} Feb 16 13:11:40 crc kubenswrapper[4740]: E0216 13:11:40.498855 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:40 crc kubenswrapper[4740]: E0216 13:11:40.502060 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:40 crc kubenswrapper[4740]: E0216 13:11:40.503644 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:40 crc kubenswrapper[4740]: E0216 13:11:40.503686 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:11:41 crc kubenswrapper[4740]: I0216 13:11:41.107792 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerStarted","Data":"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991"} Feb 16 13:11:41 crc kubenswrapper[4740]: I0216 13:11:41.107999 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:11:41 crc kubenswrapper[4740]: I0216 13:11:41.128333 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.979587201 podStartE2EDuration="5.128311647s" podCreationTimestamp="2026-02-16 13:11:36 +0000 UTC" firstStartedPulling="2026-02-16 13:11:36.906037016 +0000 UTC m=+1124.282385737" lastFinishedPulling="2026-02-16 13:11:40.054761462 +0000 UTC m=+1127.431110183" observedRunningTime="2026-02-16 13:11:41.126630944 +0000 UTC m=+1128.502979685" watchObservedRunningTime="2026-02-16 13:11:41.128311647 +0000 UTC m=+1128.504660378" Feb 16 13:11:45 crc kubenswrapper[4740]: E0216 13:11:45.499408 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:45 crc kubenswrapper[4740]: E0216 13:11:45.500952 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:45 crc kubenswrapper[4740]: E0216 13:11:45.502575 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:45 crc kubenswrapper[4740]: E0216 13:11:45.502610 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:11:45 crc kubenswrapper[4740]: I0216 13:11:45.575185 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:11:45 crc kubenswrapper[4740]: I0216 13:11:45.575265 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:11:45 crc kubenswrapper[4740]: I0216 13:11:45.575321 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:11:45 crc kubenswrapper[4740]: I0216 13:11:45.576416 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:11:45 crc kubenswrapper[4740]: I0216 13:11:45.576520 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f" gracePeriod=600 Feb 16 13:11:46 crc kubenswrapper[4740]: I0216 13:11:46.163015 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f" exitCode=0 Feb 16 13:11:46 crc kubenswrapper[4740]: I0216 13:11:46.163094 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f"} Feb 16 13:11:46 crc kubenswrapper[4740]: I0216 13:11:46.163478 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" 
event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1"} Feb 16 13:11:46 crc kubenswrapper[4740]: I0216 13:11:46.163520 4740 scope.go:117] "RemoveContainer" containerID="edadbed859a270c1b38ed01c0d5610184bd03721e8156d3fbbf92fbf10e405b3" Feb 16 13:11:50 crc kubenswrapper[4740]: E0216 13:11:50.502418 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:50 crc kubenswrapper[4740]: E0216 13:11:50.506367 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:50 crc kubenswrapper[4740]: E0216 13:11:50.507990 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:50 crc kubenswrapper[4740]: E0216 13:11:50.508067 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:11:55 crc kubenswrapper[4740]: E0216 13:11:55.498779 4740 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:55 crc kubenswrapper[4740]: E0216 13:11:55.501312 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:55 crc kubenswrapper[4740]: E0216 13:11:55.502701 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:11:55 crc kubenswrapper[4740]: E0216 13:11:55.502825 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:12:00 crc kubenswrapper[4740]: E0216 13:12:00.499059 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:12:00 crc kubenswrapper[4740]: E0216 13:12:00.501364 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:12:00 crc kubenswrapper[4740]: E0216 13:12:00.503352 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 13:12:00 crc kubenswrapper[4740]: E0216 13:12:00.503412 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.327622 4740 generic.go:334] "Generic (PLEG): container finished" podID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" exitCode=137 Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.327696 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"177b2d8c-29ab-49ea-8509-12b489123ad9","Type":"ContainerDied","Data":"29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068"} Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.451417 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.595923 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data\") pod \"177b2d8c-29ab-49ea-8509-12b489123ad9\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.596017 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle\") pod \"177b2d8c-29ab-49ea-8509-12b489123ad9\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.596142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlss6\" (UniqueName: \"kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6\") pod \"177b2d8c-29ab-49ea-8509-12b489123ad9\" (UID: \"177b2d8c-29ab-49ea-8509-12b489123ad9\") " Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.604092 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6" (OuterVolumeSpecName: "kube-api-access-vlss6") pod "177b2d8c-29ab-49ea-8509-12b489123ad9" (UID: "177b2d8c-29ab-49ea-8509-12b489123ad9"). InnerVolumeSpecName "kube-api-access-vlss6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.622911 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data" (OuterVolumeSpecName: "config-data") pod "177b2d8c-29ab-49ea-8509-12b489123ad9" (UID: "177b2d8c-29ab-49ea-8509-12b489123ad9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.626487 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "177b2d8c-29ab-49ea-8509-12b489123ad9" (UID: "177b2d8c-29ab-49ea-8509-12b489123ad9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.697517 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.697565 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177b2d8c-29ab-49ea-8509-12b489123ad9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:02 crc kubenswrapper[4740]: I0216 13:12:02.697582 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlss6\" (UniqueName: \"kubernetes.io/projected/177b2d8c-29ab-49ea-8509-12b489123ad9-kube-api-access-vlss6\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.337102 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"177b2d8c-29ab-49ea-8509-12b489123ad9","Type":"ContainerDied","Data":"e1ff36319be19db06a0b37da7df05b96fab8d1846a2ea48349ac6f0003bfbf91"} Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.337156 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.337451 4740 scope.go:117] "RemoveContainer" containerID="29b6fc3ea9c4f333b53c8403221916fd0d544c14689b86d48aa19756b314d068" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.359498 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.369557 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.396943 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:12:03 crc kubenswrapper[4740]: E0216 13:12:03.397993 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.398018 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.398605 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" containerName="nova-cell0-conductor-conductor" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.399480 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.403206 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.411426 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-45m6j" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.412846 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.512986 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.513074 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.513114 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqwwv\" (UniqueName: \"kubernetes.io/projected/07256285-a907-4822-80dc-b5f5866d437f-kube-api-access-jqwwv\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.615389 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.615446 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.615474 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqwwv\" (UniqueName: \"kubernetes.io/projected/07256285-a907-4822-80dc-b5f5866d437f-kube-api-access-jqwwv\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.622575 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.630406 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07256285-a907-4822-80dc-b5f5866d437f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.633383 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqwwv\" (UniqueName: \"kubernetes.io/projected/07256285-a907-4822-80dc-b5f5866d437f-kube-api-access-jqwwv\") pod \"nova-cell0-conductor-0\" (UID: 
\"07256285-a907-4822-80dc-b5f5866d437f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:03 crc kubenswrapper[4740]: I0216 13:12:03.739516 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:04 crc kubenswrapper[4740]: I0216 13:12:04.199554 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 13:12:04 crc kubenswrapper[4740]: I0216 13:12:04.354460 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"07256285-a907-4822-80dc-b5f5866d437f","Type":"ContainerStarted","Data":"bb82e6aade05f9fff17a43670718f2e4c7d6a343b3ebcdf2241c634c7a6e945f"} Feb 16 13:12:05 crc kubenswrapper[4740]: I0216 13:12:05.301795 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177b2d8c-29ab-49ea-8509-12b489123ad9" path="/var/lib/kubelet/pods/177b2d8c-29ab-49ea-8509-12b489123ad9/volumes" Feb 16 13:12:05 crc kubenswrapper[4740]: I0216 13:12:05.381117 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"07256285-a907-4822-80dc-b5f5866d437f","Type":"ContainerStarted","Data":"6fae93798a52b2b1b11faf82fd2a15be794c0723a2bd5840daef1bd25b3955b4"} Feb 16 13:12:05 crc kubenswrapper[4740]: I0216 13:12:05.382946 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:05 crc kubenswrapper[4740]: I0216 13:12:05.413968 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.413946279 podStartE2EDuration="2.413946279s" podCreationTimestamp="2026-02-16 13:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:05.403736656 +0000 UTC m=+1152.780085387" watchObservedRunningTime="2026-02-16 13:12:05.413946279 +0000 
UTC m=+1152.790295010" Feb 16 13:12:06 crc kubenswrapper[4740]: I0216 13:12:06.435962 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 13:12:09 crc kubenswrapper[4740]: I0216 13:12:09.913625 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:09 crc kubenswrapper[4740]: I0216 13:12:09.914344 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" containerName="kube-state-metrics" containerID="cri-o://393ab583d053b27f8beb9f7c43ec09687ddca9d3ed124563beb0f63d010c9ebb" gracePeriod=30 Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.425688 4740 generic.go:334] "Generic (PLEG): container finished" podID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" containerID="393ab583d053b27f8beb9f7c43ec09687ddca9d3ed124563beb0f63d010c9ebb" exitCode=2 Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.425942 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dffdca64-bf57-49ca-9d8d-c6c752e59a37","Type":"ContainerDied","Data":"393ab583d053b27f8beb9f7c43ec09687ddca9d3ed124563beb0f63d010c9ebb"} Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.425967 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dffdca64-bf57-49ca-9d8d-c6c752e59a37","Type":"ContainerDied","Data":"866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d"} Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.425977 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866ce044188afcf46006c2ed0b66b350f821d5770cb7a1497b35d5de1fe51c2d" Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.481754 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.560845 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpkcs\" (UniqueName: \"kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs\") pod \"dffdca64-bf57-49ca-9d8d-c6c752e59a37\" (UID: \"dffdca64-bf57-49ca-9d8d-c6c752e59a37\") " Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.576129 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs" (OuterVolumeSpecName: "kube-api-access-fpkcs") pod "dffdca64-bf57-49ca-9d8d-c6c752e59a37" (UID: "dffdca64-bf57-49ca-9d8d-c6c752e59a37"). InnerVolumeSpecName "kube-api-access-fpkcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:10 crc kubenswrapper[4740]: I0216 13:12:10.662714 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpkcs\" (UniqueName: \"kubernetes.io/projected/dffdca64-bf57-49ca-9d8d-c6c752e59a37-kube-api-access-fpkcs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.434071 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.457112 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.470072 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.512655 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:11 crc kubenswrapper[4740]: E0216 13:12:11.513922 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" containerName="kube-state-metrics" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.513943 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" containerName="kube-state-metrics" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.514335 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" containerName="kube-state-metrics" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.515250 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.517136 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.517796 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.523749 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.588841 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.588953 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.589006 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqn4z\" (UniqueName: \"kubernetes.io/projected/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-api-access-cqn4z\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.589131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.690666 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.690726 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.690751 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqn4z\" (UniqueName: \"kubernetes.io/projected/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-api-access-cqn4z\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.690773 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.698850 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.699463 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.704285 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.714526 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqn4z\" (UniqueName: \"kubernetes.io/projected/05c7ea6d-5a24-4b21-851c-e7d51fa61a38-kube-api-access-cqn4z\") pod \"kube-state-metrics-0\" (UID: \"05c7ea6d-5a24-4b21-851c-e7d51fa61a38\") " pod="openstack/kube-state-metrics-0" Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.804334 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.804673 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-central-agent" containerID="cri-o://c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f" gracePeriod=30 Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.804703 4740 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="proxy-httpd" containerID="cri-o://0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991" gracePeriod=30 Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.804792 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-notification-agent" containerID="cri-o://4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d" gracePeriod=30 Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.805020 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="sg-core" containerID="cri-o://6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706" gracePeriod=30 Feb 16 13:12:11 crc kubenswrapper[4740]: I0216 13:12:11.833689 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.272464 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.446559 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05c7ea6d-5a24-4b21-851c-e7d51fa61a38","Type":"ContainerStarted","Data":"f2ae81f20227596beb1c048c192eb62e88f8dbb91ec4dc48019e683a75a9d3ba"} Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.449941 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerID="0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991" exitCode=0 Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.449964 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerID="6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706" exitCode=2 Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.449971 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerID="c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f" exitCode=0 Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.449983 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerDied","Data":"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991"} Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.450004 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerDied","Data":"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706"} Feb 16 13:12:12 crc kubenswrapper[4740]: I0216 13:12:12.450013 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerDied","Data":"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f"} Feb 16 13:12:13 crc kubenswrapper[4740]: I0216 13:12:13.293222 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dffdca64-bf57-49ca-9d8d-c6c752e59a37" path="/var/lib/kubelet/pods/dffdca64-bf57-49ca-9d8d-c6c752e59a37/volumes" Feb 16 13:12:13 crc kubenswrapper[4740]: I0216 13:12:13.460246 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05c7ea6d-5a24-4b21-851c-e7d51fa61a38","Type":"ContainerStarted","Data":"d7eac0d86b4b1671eb3ce7edb75266b95feed3d4860ce524d31e6b6e05ac8c57"} Feb 16 13:12:13 crc kubenswrapper[4740]: I0216 13:12:13.461283 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 13:12:13 crc kubenswrapper[4740]: I0216 13:12:13.477478 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.125804058 podStartE2EDuration="2.477460527s" podCreationTimestamp="2026-02-16 13:12:11 +0000 UTC" firstStartedPulling="2026-02-16 13:12:12.2788761 +0000 UTC m=+1159.655224821" lastFinishedPulling="2026-02-16 13:12:12.630532569 +0000 UTC m=+1160.006881290" observedRunningTime="2026-02-16 13:12:13.475116863 +0000 UTC m=+1160.851465594" watchObservedRunningTime="2026-02-16 13:12:13.477460527 +0000 UTC m=+1160.853809248" Feb 16 13:12:13 crc kubenswrapper[4740]: I0216 13:12:13.771657 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.268318 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8f7gd"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.273542 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.276105 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.276268 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.285540 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8f7gd"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.341262 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.341443 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz5wj\" (UniqueName: \"kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.341912 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.341979 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.438018 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.444409 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.444463 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.444539 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.444592 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz5wj\" (UniqueName: \"kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 
13:12:14.453576 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.463475 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.482838 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.492571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz5wj\" (UniqueName: \"kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj\") pod \"nova-cell0-cell-mapping-8f7gd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.512113 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerID="4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d" exitCode=0 Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.512467 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.512486 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerDied","Data":"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d"} Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.512593 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6a38c8-9c96-44fc-a4cc-247e314350b0","Type":"ContainerDied","Data":"8389d1c57a3628927feb7f7fa0fa61763312bafbfe5d743cb250a469e83bb102"} Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.512630 4740 scope.go:117] "RemoveContainer" containerID="0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.542928 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.543281 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="proxy-httpd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543293 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="proxy-httpd" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.543327 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-central-agent" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543333 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-central-agent" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.543344 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-notification-agent" 
Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543350 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-notification-agent" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.543372 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="sg-core" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543378 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="sg-core" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543524 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="sg-core" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543544 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-notification-agent" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543557 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="ceilometer-central-agent" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.543567 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" containerName="proxy-httpd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.546616 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.550652 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.550877 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.550935 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.551024 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.551067 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrd24\" (UniqueName: \"kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.551155 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.551214 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml\") pod \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\" (UID: \"ef6a38c8-9c96-44fc-a4cc-247e314350b0\") " Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.551916 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.552183 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.553109 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.559373 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.576380 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.578759 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts" (OuterVolumeSpecName: "scripts") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.585529 4740 scope.go:117] "RemoveContainer" containerID="6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.585562 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24" (OuterVolumeSpecName: "kube-api-access-mrd24") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "kube-api-access-mrd24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.600744 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.647761 4740 scope.go:117] "RemoveContainer" containerID="4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.655907 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.656061 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4qt\" (UniqueName: \"kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.656089 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.656131 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrd24\" (UniqueName: \"kubernetes.io/projected/ef6a38c8-9c96-44fc-a4cc-247e314350b0-kube-api-access-mrd24\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.656142 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 
13:12:14.656153 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6a38c8-9c96-44fc-a4cc-247e314350b0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.678763 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.687681 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.701957 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.708315 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.714794 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.725126 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.725159 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.725230 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.734770 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.736182 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.736754 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.737396 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.741007 4740 scope.go:117] "RemoveContainer" containerID="c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.750024 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757365 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757410 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757435 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757464 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757494 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757515 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtlkn\" (UniqueName: \"kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757535 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8nmx\" (UniqueName: \"kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757555 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs\") 
pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757574 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757606 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zctqv\" (UniqueName: \"kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757679 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4qt\" (UniqueName: \"kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757711 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " 
pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.757746 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.758592 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.762713 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.763645 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.764177 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.790612 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.790681 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"] Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.800224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg4qt\" (UniqueName: \"kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt\") pod \"nova-scheduler-0\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.799118 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data" (OuterVolumeSpecName: "config-data") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.841917 4740 scope.go:117] "RemoveContainer" containerID="0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.843824 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991\": container with ID starting with 0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991 not found: ID does not exist" containerID="0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.843862 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991"} err="failed to get container status \"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991\": rpc error: code = NotFound desc = could not find container \"0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991\": container with ID starting with 0f54f13573764c9060af787a8fa40cfeec86741d8ccffb0c7141d41d1c39f991 not found: ID does not exist" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.843888 4740 scope.go:117] "RemoveContainer" containerID="6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.844417 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706\": container with ID starting with 6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706 not found: ID does not exist" containerID="6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.844433 
4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706"} err="failed to get container status \"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706\": rpc error: code = NotFound desc = could not find container \"6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706\": container with ID starting with 6833142c20a1a11d199556a1bc8fdb69dfe7a7b2b2b7bbd2b7bf6b629f9a2706 not found: ID does not exist" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.844446 4740 scope.go:117] "RemoveContainer" containerID="4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 13:12:14.844748 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d\": container with ID starting with 4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d not found: ID does not exist" containerID="4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.844763 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d"} err="failed to get container status \"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d\": rpc error: code = NotFound desc = could not find container \"4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d\": container with ID starting with 4ed161bd8247076c211d3f1b773561ee9c43f98790c68cd03305cca80fbaf90d not found: ID does not exist" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.844775 4740 scope.go:117] "RemoveContainer" containerID="c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f" Feb 16 13:12:14 crc kubenswrapper[4740]: E0216 
13:12:14.844963 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f\": container with ID starting with c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f not found: ID does not exist" containerID="c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.844977 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f"} err="failed to get container status \"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f\": rpc error: code = NotFound desc = could not find container \"c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f\": container with ID starting with c739721103db74b4b4fc2c2e2cd95b0862d76611abb9bd7e217db8c7140cdb9f not found: ID does not exist" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.863931 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef6a38c8-9c96-44fc-a4cc-247e314350b0" (UID: "ef6a38c8-9c96-44fc-a4cc-247e314350b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864656 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864722 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864774 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864795 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.864840 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865053 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865090 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865119 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4b75\" (UniqueName: \"kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865142 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865165 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mtlkn\" (UniqueName: \"kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865184 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8nmx\" (UniqueName: \"kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865296 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865592 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865624 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865649 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zctqv\" (UniqueName: \"kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv\") 
pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865672 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865781 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.865794 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6a38c8-9c96-44fc-a4cc-247e314350b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.867841 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.868043 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.868393 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs\") pod \"nova-metadata-0\" (UID: 
\"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.871598 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.871953 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.872173 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.882560 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.895124 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.899032 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mtlkn\" (UniqueName: \"kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn\") pod \"nova-cell1-novncproxy-0\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.899469 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zctqv\" (UniqueName: \"kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv\") pod \"nova-metadata-0\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") " pod="openstack/nova-metadata-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.903891 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8nmx\" (UniqueName: \"kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx\") pod \"nova-api-0\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " pod="openstack/nova-api-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.971850 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.971898 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.971944 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.971970 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.971995 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4b75\" (UniqueName: \"kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.972061 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.973138 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.973160 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config\") 
pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.973334 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.974000 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.975082 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.979396 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:14 crc kubenswrapper[4740]: I0216 13:12:14.989833 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4b75\" (UniqueName: \"kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75\") pod \"dnsmasq-dns-865f5d856f-zmbdr\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.096605 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.131364 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.143686 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.153497 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.157910 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.163789 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.219485 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.248078 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.248225 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.257352 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.257637 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.269386 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.273174 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8f7gd"]
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292557 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292628 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck59f\" (UniqueName: \"kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292669 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292686 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292703 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292719 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.292792 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.304000 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6a38c8-9c96-44fc-a4cc-247e314350b0" path="/var/lib/kubelet/pods/ef6a38c8-9c96-44fc-a4cc-247e314350b0/volumes"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.394706 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.394862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.394965 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395043 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck59f\" (UniqueName: \"kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395126 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395151 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395179 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395205 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.395434 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.397138 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.406676 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.408258 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.408497 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.413525 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.429893 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.448030 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck59f\" (UniqueName: \"kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f\") pod \"ceilometer-0\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.558025 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 13:12:15 crc kubenswrapper[4740]: W0216 13:12:15.561361 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcb162a0_b3b3_4b3a_b8f2_4d3a2997437f.slice/crio-049599c73a96f0a67b4dc198dd8d7eea6d60e002de96297a4f60108bb30585bc WatchSource:0}: Error finding container 049599c73a96f0a67b4dc198dd8d7eea6d60e002de96297a4f60108bb30585bc: Status 404 returned error can't find the container with id 049599c73a96f0a67b4dc198dd8d7eea6d60e002de96297a4f60108bb30585bc
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.562380 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8f7gd" event={"ID":"9f4deadb-18ac-4d06-ba22-e391b19d38cd","Type":"ContainerStarted","Data":"b78b180e360c9e62f97483c5f9447dac4cda05293b92d3033ba99440494240fd"}
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.593287 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.827686 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:12:15 crc kubenswrapper[4740]: W0216 13:12:15.838522 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod620a5f5a_99ec_4801_ba17_d9a5fa7ea7ac.slice/crio-8f6654002ae747ce606f1b1f23d169b4f4ec2bec72a65e7471234bf4a2173103 WatchSource:0}: Error finding container 8f6654002ae747ce606f1b1f23d169b4f4ec2bec72a65e7471234bf4a2173103: Status 404 returned error can't find the container with id 8f6654002ae747ce606f1b1f23d169b4f4ec2bec72a65e7471234bf4a2173103
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.839744 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.910324 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8crw8"]
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.911915 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.915779 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.916413 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.940155 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8crw8"]
Feb 16 13:12:15 crc kubenswrapper[4740]: W0216 13:12:15.989100 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod013bed9b_6a31_4094_bb95_addbf3f4bd01.slice/crio-3cfed7c43284629a6351a4ee50333996e9abd11f4199ce2c961af5116ad1bff2 WatchSource:0}: Error finding container 3cfed7c43284629a6351a4ee50333996e9abd11f4199ce2c961af5116ad1bff2: Status 404 returned error can't find the container with id 3cfed7c43284629a6351a4ee50333996e9abd11f4199ce2c961af5116ad1bff2
Feb 16 13:12:15 crc kubenswrapper[4740]: I0216 13:12:15.989346 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.023366 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbsrj\" (UniqueName: \"kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.023434 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.023591 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.023703 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: W0216 13:12:16.110142 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56c1f9b3_2d24_4a15_a9ed_1e580d07368d.slice/crio-0f5c11f9e9c71c25a9a0ab7c58bca8206f5d89e3f123e7f468eccce30de6f1c4 WatchSource:0}: Error finding container 0f5c11f9e9c71c25a9a0ab7c58bca8206f5d89e3f123e7f468eccce30de6f1c4: Status 404 returned error can't find the container with id 0f5c11f9e9c71c25a9a0ab7c58bca8206f5d89e3f123e7f468eccce30de6f1c4
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.111788 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"]
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.125193 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.125277 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbsrj\" (UniqueName: \"kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.125304 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.125370 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.130900 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.135016 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.139883 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.144542 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbsrj\" (UniqueName: \"kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj\") pod \"nova-cell1-conductor-db-sync-8crw8\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.249287 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8crw8"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.252847 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:12:16 crc kubenswrapper[4740]: W0216 13:12:16.268133 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7722219_9b84_4adf_bf81_c4ac8a0d9d2c.slice/crio-b0fc28209f10a3e996f2004b2935e7f7f0d3702abe3519ffcea0b77c33e5a593 WatchSource:0}: Error finding container b0fc28209f10a3e996f2004b2935e7f7f0d3702abe3519ffcea0b77c33e5a593: Status 404 returned error can't find the container with id b0fc28209f10a3e996f2004b2935e7f7f0d3702abe3519ffcea0b77c33e5a593
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.586984 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f","Type":"ContainerStarted","Data":"049599c73a96f0a67b4dc198dd8d7eea6d60e002de96297a4f60108bb30585bc"}
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.588975 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"013bed9b-6a31-4094-bb95-addbf3f4bd01","Type":"ContainerStarted","Data":"3cfed7c43284629a6351a4ee50333996e9abd11f4199ce2c961af5116ad1bff2"}
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.591524 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8f7gd" event={"ID":"9f4deadb-18ac-4d06-ba22-e391b19d38cd","Type":"ContainerStarted","Data":"0919266d903860a70c30148766f4b23532edcd71ec7a5ae724fd4a59f06dffc4"}
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.595175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerStarted","Data":"b0fc28209f10a3e996f2004b2935e7f7f0d3702abe3519ffcea0b77c33e5a593"}
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.597198 4740 generic.go:334] "Generic (PLEG): container finished" podID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerID="427a4347253b1f218cb29a2e5fa718786b3338a5e82bf3ce06ee2ab5a6254207" exitCode=0
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.597275 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" event={"ID":"56c1f9b3-2d24-4a15-a9ed-1e580d07368d","Type":"ContainerDied","Data":"427a4347253b1f218cb29a2e5fa718786b3338a5e82bf3ce06ee2ab5a6254207"}
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.597302 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" event={"ID":"56c1f9b3-2d24-4a15-a9ed-1e580d07368d","Type":"ContainerStarted","Data":"0f5c11f9e9c71c25a9a0ab7c58bca8206f5d89e3f123e7f468eccce30de6f1c4"}
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.604454 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerStarted","Data":"8f6654002ae747ce606f1b1f23d169b4f4ec2bec72a65e7471234bf4a2173103"}
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.609494 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerStarted","Data":"e04f019c32476631dfc276b161c2d8004e68193c635f2f2f3e656fded80b6609"}
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.665351 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8f7gd" podStartSLOduration=2.665329099 podStartE2EDuration="2.665329099s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:16.611206002 +0000 UTC m=+1163.987554733" watchObservedRunningTime="2026-02-16 13:12:16.665329099 +0000 UTC m=+1164.041677820"
Feb 16 13:12:16 crc kubenswrapper[4740]: I0216 13:12:16.702233 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8crw8"]
Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.624275 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerStarted","Data":"0e9acf49c8c57a0192695f150464693337d9eca058d4bead53eaefc925c92141"}
Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.626426 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" event={"ID":"56c1f9b3-2d24-4a15-a9ed-1e580d07368d","Type":"ContainerStarted","Data":"8afebe6e22dd2d5cabfba1e3897a579c79457dc251f6bc22ca4178b93bc80a65"}
Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.627923 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr"
Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.637212 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8crw8" event={"ID":"975c922d-b91a-4cf6-9739-0d478d19765a","Type":"ContainerStarted","Data":"2b6d6cab54020a43fe901cf5d2e1f8b359bd5539376cbb46667ef99bd2c317e5"}
Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.637280 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8crw8" event={"ID":"975c922d-b91a-4cf6-9739-0d478d19765a","Type":"ContainerStarted","Data":"53d2ffe580056abfb3ffff00eacb592e28263b522ff5fc6b6961c1abe2f0b38c"}
Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.680398 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" podStartSLOduration=3.680357508 podStartE2EDuration="3.680357508s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:17.655148513 +0000 UTC m=+1165.031497244" watchObservedRunningTime="2026-02-16 13:12:17.680357508 +0000 UTC m=+1165.056706229"
Feb 16 13:12:17 crc kubenswrapper[4740]: I0216 13:12:17.697228 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8crw8" podStartSLOduration=2.697207759 podStartE2EDuration="2.697207759s" podCreationTimestamp="2026-02-16 13:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:17.677444136 +0000 UTC m=+1165.053792857" watchObservedRunningTime="2026-02-16 13:12:17.697207759 +0000 UTC m=+1165.073556480"
Feb 16 13:12:18 crc kubenswrapper[4740]: I0216 13:12:18.812928 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 13:12:18 crc kubenswrapper[4740]: I0216 13:12:18.839593 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.658818 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f","Type":"ContainerStarted","Data":"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08"}
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.661121 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"013bed9b-6a31-4094-bb95-addbf3f4bd01","Type":"ContainerStarted","Data":"23eee7be68a23b02362f55885f674544ef0e19cd3bbf30ad4d01d2107403ffde"}
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.661223 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="013bed9b-6a31-4094-bb95-addbf3f4bd01" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://23eee7be68a23b02362f55885f674544ef0e19cd3bbf30ad4d01d2107403ffde" gracePeriod=30
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.668210 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerStarted","Data":"df89fbb68337555ca32e22cc3c544316db4797904bb59c1bc4b6fb2d4bd470a9"}
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.675639 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerStarted","Data":"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7"}
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.675687 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerStarted","Data":"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80"}
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.677623 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerStarted","Data":"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44"}
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.677663 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerStarted","Data":"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921"}
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.677683 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-log" containerID="cri-o://5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" gracePeriod=30
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.677725 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-metadata" containerID="cri-o://5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44" gracePeriod=30
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.691445 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.486322844 podStartE2EDuration="5.691427676s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="2026-02-16 13:12:15.568115538 +0000 UTC m=+1162.944464259" lastFinishedPulling="2026-02-16 13:12:18.77322037 +0000 UTC m=+1166.149569091" observedRunningTime="2026-02-16 13:12:19.683101034 +0000 UTC m=+1167.059449755" watchObservedRunningTime="2026-02-16 13:12:19.691427676 +0000 UTC m=+1167.067776397"
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.712013 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.780831578 podStartE2EDuration="5.711994981s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="2026-02-16 13:12:15.845944078 +0000 UTC m=+1163.222292809" lastFinishedPulling="2026-02-16 13:12:18.777107491 +0000 UTC m=+1166.153456212" observedRunningTime="2026-02-16 13:12:19.705002122 +0000 UTC m=+1167.081350863" watchObservedRunningTime="2026-02-16 13:12:19.711994981 +0000 UTC m=+1167.088343702"
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.755372 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.811418723 podStartE2EDuration="5.755355814s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="2026-02-16 13:12:15.870237715 +0000 UTC m=+1163.246586436" lastFinishedPulling="2026-02-16 13:12:18.814174806 +0000 UTC m=+1166.190523527" observedRunningTime="2026-02-16 13:12:19.732132754 +0000 UTC m=+1167.108481475" watchObservedRunningTime="2026-02-16 13:12:19.755355814 +0000 UTC m=+1167.131704545"
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.760027 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.934487964 podStartE2EDuration="5.76000892s" podCreationTimestamp="2026-02-16 13:12:14 +0000 UTC" firstStartedPulling="2026-02-16 13:12:15.991412516 +0000 UTC m=+1163.367761247" lastFinishedPulling="2026-02-16 13:12:18.816933482 +0000 UTC m=+1166.193282203" observedRunningTime="2026-02-16 13:12:19.750255734 +0000 UTC m=+1167.126604455" watchObservedRunningTime="2026-02-16 13:12:19.76000892 +0000 UTC m=+1167.136357641"
Feb 16 13:12:19 crc kubenswrapper[4740]: I0216 13:12:19.980237 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.132391 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.132433 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.144775 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.507020 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.651456 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs\") pod \"7687ffed-8f01-4301-a20c-5feba63ac079\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") "
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.651567 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zctqv\" (UniqueName: \"kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv\") pod \"7687ffed-8f01-4301-a20c-5feba63ac079\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") "
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.651738 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle\") pod \"7687ffed-8f01-4301-a20c-5feba63ac079\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") "
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.651870 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data\") pod \"7687ffed-8f01-4301-a20c-5feba63ac079\" (UID: \"7687ffed-8f01-4301-a20c-5feba63ac079\") "
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.652073 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs" (OuterVolumeSpecName: "logs") pod "7687ffed-8f01-4301-a20c-5feba63ac079" (UID: "7687ffed-8f01-4301-a20c-5feba63ac079"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.652685 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7687ffed-8f01-4301-a20c-5feba63ac079-logs\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.658513 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv" (OuterVolumeSpecName: "kube-api-access-zctqv") pod "7687ffed-8f01-4301-a20c-5feba63ac079" (UID: "7687ffed-8f01-4301-a20c-5feba63ac079"). InnerVolumeSpecName "kube-api-access-zctqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.683541 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7687ffed-8f01-4301-a20c-5feba63ac079" (UID: "7687ffed-8f01-4301-a20c-5feba63ac079"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689562 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689570 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerDied","Data":"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44"}
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689647 4740 scope.go:117] "RemoveContainer" containerID="5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44"
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689539 4740 generic.go:334] "Generic (PLEG): container finished" podID="7687ffed-8f01-4301-a20c-5feba63ac079" containerID="5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44" exitCode=0
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689936 4740 generic.go:334] "Generic (PLEG): container finished" podID="7687ffed-8f01-4301-a20c-5feba63ac079" containerID="5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" exitCode=143
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.689988 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerDied","Data":"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921"}
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.690003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7687ffed-8f01-4301-a20c-5feba63ac079","Type":"ContainerDied","Data":"e04f019c32476631dfc276b161c2d8004e68193c635f2f2f3e656fded80b6609"}
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.696228 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerStarted","Data":"705365a97839558eb2816feed6e761a8ab9d12cd4c2bcbf3179208f1f08c7614"}
Feb 16 13:12:20 crc kubenswrapper[4740]: I0216
13:12:20.701956 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data" (OuterVolumeSpecName: "config-data") pod "7687ffed-8f01-4301-a20c-5feba63ac079" (UID: "7687ffed-8f01-4301-a20c-5feba63ac079"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.724006 4740 scope.go:117] "RemoveContainer" containerID="5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.747130 4740 scope.go:117] "RemoveContainer" containerID="5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44" Feb 16 13:12:20 crc kubenswrapper[4740]: E0216 13:12:20.747568 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44\": container with ID starting with 5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44 not found: ID does not exist" containerID="5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.747600 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44"} err="failed to get container status \"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44\": rpc error: code = NotFound desc = could not find container \"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44\": container with ID starting with 5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44 not found: ID does not exist" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.747629 4740 scope.go:117] "RemoveContainer" containerID="5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" Feb 16 13:12:20 crc 
kubenswrapper[4740]: E0216 13:12:20.748176 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921\": container with ID starting with 5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921 not found: ID does not exist" containerID="5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.748198 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921"} err="failed to get container status \"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921\": rpc error: code = NotFound desc = could not find container \"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921\": container with ID starting with 5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921 not found: ID does not exist" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.748211 4740 scope.go:117] "RemoveContainer" containerID="5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.748588 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44"} err="failed to get container status \"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44\": rpc error: code = NotFound desc = could not find container \"5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44\": container with ID starting with 5a0c642f5bb984552b36e71e063ef4b1daaa5b141a296c8c66b36d7c3d161f44 not found: ID does not exist" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.748641 4740 scope.go:117] "RemoveContainer" containerID="5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921" Feb 16 
13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.749087 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921"} err="failed to get container status \"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921\": rpc error: code = NotFound desc = could not find container \"5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921\": container with ID starting with 5702277778df9569367cb648eaa8ab778d47f6360255ed3b04fbcdd9a52a5921 not found: ID does not exist" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.754734 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.754765 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zctqv\" (UniqueName: \"kubernetes.io/projected/7687ffed-8f01-4301-a20c-5feba63ac079-kube-api-access-zctqv\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:20 crc kubenswrapper[4740]: I0216 13:12:20.754775 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7687ffed-8f01-4301-a20c-5feba63ac079-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.027440 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.039726 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.061579 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:21 crc kubenswrapper[4740]: E0216 13:12:21.062283 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-log" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.062415 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-log" Feb 16 13:12:21 crc kubenswrapper[4740]: E0216 13:12:21.062500 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-metadata" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.062571 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-metadata" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.062911 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-log" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.063007 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" containerName="nova-metadata-metadata" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.067089 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.069376 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.069661 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.080936 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.163077 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.163363 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.163914 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.164135 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs\") pod \"nova-metadata-0\" (UID: 
\"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.164350 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ptm9\" (UniqueName: \"kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.266868 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ptm9\" (UniqueName: \"kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.267005 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.267051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.267083 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 
13:12:21.267150 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.267571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.272542 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.272562 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.288528 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ptm9\" (UniqueName: \"kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9\") pod \"nova-metadata-0\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.288953 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.297303 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7687ffed-8f01-4301-a20c-5feba63ac079" path="/var/lib/kubelet/pods/7687ffed-8f01-4301-a20c-5feba63ac079/volumes" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.388007 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.856595 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 13:12:21 crc kubenswrapper[4740]: I0216 13:12:21.906671 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.717492 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerStarted","Data":"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6"} Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.717732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerStarted","Data":"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887"} Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.717743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerStarted","Data":"0641bfa13271f6c66fab9d171fca52a4133710b322c360f7da95c74871437730"} Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.723288 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerStarted","Data":"b4b0ed2b2eabee1e3146c4ac0b0d7bb3dfee892915eda3f99bf8c08572c0a096"} Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.723665 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.748447 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.748424143 podStartE2EDuration="1.748424143s" podCreationTimestamp="2026-02-16 13:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:22.738512131 +0000 UTC m=+1170.114860852" watchObservedRunningTime="2026-02-16 13:12:22.748424143 +0000 UTC m=+1170.124772864" Feb 16 13:12:22 crc kubenswrapper[4740]: I0216 13:12:22.770593 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.571676365 podStartE2EDuration="7.770571638s" podCreationTimestamp="2026-02-16 13:12:15 +0000 UTC" firstStartedPulling="2026-02-16 13:12:16.2737363 +0000 UTC m=+1163.650085021" lastFinishedPulling="2026-02-16 13:12:21.472631583 +0000 UTC m=+1168.848980294" observedRunningTime="2026-02-16 13:12:22.76140425 +0000 UTC m=+1170.137752981" watchObservedRunningTime="2026-02-16 13:12:22.770571638 +0000 UTC m=+1170.146920359" Feb 16 13:12:23 crc kubenswrapper[4740]: I0216 13:12:23.734600 4740 generic.go:334] "Generic (PLEG): container finished" podID="9f4deadb-18ac-4d06-ba22-e391b19d38cd" containerID="0919266d903860a70c30148766f4b23532edcd71ec7a5ae724fd4a59f06dffc4" exitCode=0 Feb 16 13:12:23 crc kubenswrapper[4740]: I0216 13:12:23.734801 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8f7gd" 
event={"ID":"9f4deadb-18ac-4d06-ba22-e391b19d38cd","Type":"ContainerDied","Data":"0919266d903860a70c30148766f4b23532edcd71ec7a5ae724fd4a59f06dffc4"} Feb 16 13:12:24 crc kubenswrapper[4740]: I0216 13:12:24.747285 4740 generic.go:334] "Generic (PLEG): container finished" podID="975c922d-b91a-4cf6-9739-0d478d19765a" containerID="2b6d6cab54020a43fe901cf5d2e1f8b359bd5539376cbb46667ef99bd2c317e5" exitCode=0 Feb 16 13:12:24 crc kubenswrapper[4740]: I0216 13:12:24.747446 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8crw8" event={"ID":"975c922d-b91a-4cf6-9739-0d478d19765a","Type":"ContainerDied","Data":"2b6d6cab54020a43fe901cf5d2e1f8b359bd5539376cbb46667ef99bd2c317e5"} Feb 16 13:12:24 crc kubenswrapper[4740]: I0216 13:12:24.979919 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.005065 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.081702 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.097891 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.098142 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.160636 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.162745 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz5wj\" (UniqueName: \"kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj\") pod \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.162832 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data\") pod \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.162891 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle\") pod \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.162983 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts\") pod \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\" (UID: \"9f4deadb-18ac-4d06-ba22-e391b19d38cd\") " Feb 
16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.171057 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj" (OuterVolumeSpecName: "kube-api-access-vz5wj") pod "9f4deadb-18ac-4d06-ba22-e391b19d38cd" (UID: "9f4deadb-18ac-4d06-ba22-e391b19d38cd"). InnerVolumeSpecName "kube-api-access-vz5wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.180518 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts" (OuterVolumeSpecName: "scripts") pod "9f4deadb-18ac-4d06-ba22-e391b19d38cd" (UID: "9f4deadb-18ac-4d06-ba22-e391b19d38cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.204976 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data" (OuterVolumeSpecName: "config-data") pod "9f4deadb-18ac-4d06-ba22-e391b19d38cd" (UID: "9f4deadb-18ac-4d06-ba22-e391b19d38cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.211004 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f4deadb-18ac-4d06-ba22-e391b19d38cd" (UID: "9f4deadb-18ac-4d06-ba22-e391b19d38cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.230244 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"] Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.230466 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="dnsmasq-dns" containerID="cri-o://3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3" gracePeriod=10 Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.265474 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.265721 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz5wj\" (UniqueName: \"kubernetes.io/projected/9f4deadb-18ac-4d06-ba22-e391b19d38cd-kube-api-access-vz5wj\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.265732 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.265741 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4deadb-18ac-4d06-ba22-e391b19d38cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.764700 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.783740 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8f7gd" event={"ID":"9f4deadb-18ac-4d06-ba22-e391b19d38cd","Type":"ContainerDied","Data":"b78b180e360c9e62f97483c5f9447dac4cda05293b92d3033ba99440494240fd"} Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.783832 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b78b180e360c9e62f97483c5f9447dac4cda05293b92d3033ba99440494240fd" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.783959 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8f7gd" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.811124 4740 generic.go:334] "Generic (PLEG): container finished" podID="fecd834c-f149-401b-9c43-810e215a68ed" containerID="3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3" exitCode=0 Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.811459 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" event={"ID":"fecd834c-f149-401b-9c43-810e215a68ed","Type":"ContainerDied","Data":"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3"} Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.811503 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" event={"ID":"fecd834c-f149-401b-9c43-810e215a68ed","Type":"ContainerDied","Data":"07d715fc080ad12a61c95f40dabacc8440c0cb90c7a33ab67e7813105918c946"} Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.811522 4740 scope.go:117] "RemoveContainer" containerID="3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.811674 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-hjtmw" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892252 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cllbp\" (UniqueName: \"kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892334 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892402 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892443 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892469 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.892486 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb\") pod \"fecd834c-f149-401b-9c43-810e215a68ed\" (UID: \"fecd834c-f149-401b-9c43-810e215a68ed\") " Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.896501 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp" (OuterVolumeSpecName: "kube-api-access-cllbp") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "kube-api-access-cllbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.916786 4740 scope.go:117] "RemoveContainer" containerID="7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.964536 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.965843 4740 scope.go:117] "RemoveContainer" containerID="3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3" Feb 16 13:12:25 crc kubenswrapper[4740]: E0216 13:12:25.966480 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3\": container with ID starting with 3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3 not found: ID does not exist" containerID="3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.966584 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3"} err="failed to get container status \"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3\": rpc error: code = NotFound desc = 
could not find container \"3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3\": container with ID starting with 3d5764ff7db6cd2e72eb4999627c9b2acc5f5cd60b19beeb552d2625436950b3 not found: ID does not exist" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.966673 4740 scope.go:117] "RemoveContainer" containerID="7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa" Feb 16 13:12:25 crc kubenswrapper[4740]: E0216 13:12:25.967085 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa\": container with ID starting with 7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa not found: ID does not exist" containerID="7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.967162 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa"} err="failed to get container status \"7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa\": rpc error: code = NotFound desc = could not find container \"7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa\": container with ID starting with 7115865dc08a769f9496146ea20e3f676cec5b3e95abaf73b8c2cde7b5b696aa not found: ID does not exist" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.972017 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.974748 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.976936 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.988732 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.994365 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cllbp\" (UniqueName: \"kubernetes.io/projected/fecd834c-f149-401b-9c43-810e215a68ed-kube-api-access-cllbp\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.994392 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:25 crc kubenswrapper[4740]: I0216 13:12:25.994404 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.000245 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config" (OuterVolumeSpecName: "config") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.000278 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.000714 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fecd834c-f149-401b-9c43-810e215a68ed" (UID: "fecd834c-f149-401b-9c43-810e215a68ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.012663 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.013048 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-log" containerID="cri-o://ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" gracePeriod=30 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.013511 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-metadata" containerID="cri-o://894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" gracePeriod=30 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.096526 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.096774 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.096783 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd834c-f149-401b-9c43-810e215a68ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.195022 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.195304 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.228582 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.248300 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-hjtmw"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.340195 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.388879 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.389152 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.520244 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbsrj\" (UniqueName: \"kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj\") pod \"975c922d-b91a-4cf6-9739-0d478d19765a\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.520288 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle\") pod \"975c922d-b91a-4cf6-9739-0d478d19765a\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.520307 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data\") pod \"975c922d-b91a-4cf6-9739-0d478d19765a\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.520422 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts\") pod \"975c922d-b91a-4cf6-9739-0d478d19765a\" (UID: \"975c922d-b91a-4cf6-9739-0d478d19765a\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.525172 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts" (OuterVolumeSpecName: "scripts") pod "975c922d-b91a-4cf6-9739-0d478d19765a" (UID: "975c922d-b91a-4cf6-9739-0d478d19765a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.526688 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj" (OuterVolumeSpecName: "kube-api-access-fbsrj") pod "975c922d-b91a-4cf6-9739-0d478d19765a" (UID: "975c922d-b91a-4cf6-9739-0d478d19765a"). InnerVolumeSpecName "kube-api-access-fbsrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.549924 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data" (OuterVolumeSpecName: "config-data") pod "975c922d-b91a-4cf6-9739-0d478d19765a" (UID: "975c922d-b91a-4cf6-9739-0d478d19765a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.557197 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "975c922d-b91a-4cf6-9739-0d478d19765a" (UID: "975c922d-b91a-4cf6-9739-0d478d19765a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.578786 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.622276 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.622302 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbsrj\" (UniqueName: \"kubernetes.io/projected/975c922d-b91a-4cf6-9739-0d478d19765a-kube-api-access-fbsrj\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.622313 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.622325 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975c922d-b91a-4cf6-9739-0d478d19765a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.723271 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data\") pod \"b0575bfd-41e7-4099-8f6e-ccece45c8478\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.723397 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs\") pod \"b0575bfd-41e7-4099-8f6e-ccece45c8478\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.723478 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle\") pod \"b0575bfd-41e7-4099-8f6e-ccece45c8478\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.723514 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ptm9\" (UniqueName: \"kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9\") pod \"b0575bfd-41e7-4099-8f6e-ccece45c8478\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.723531 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs\") pod \"b0575bfd-41e7-4099-8f6e-ccece45c8478\" (UID: \"b0575bfd-41e7-4099-8f6e-ccece45c8478\") " Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.724678 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs" (OuterVolumeSpecName: "logs") pod "b0575bfd-41e7-4099-8f6e-ccece45c8478" (UID: "b0575bfd-41e7-4099-8f6e-ccece45c8478"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.740975 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9" (OuterVolumeSpecName: "kube-api-access-7ptm9") pod "b0575bfd-41e7-4099-8f6e-ccece45c8478" (UID: "b0575bfd-41e7-4099-8f6e-ccece45c8478"). InnerVolumeSpecName "kube-api-access-7ptm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.759148 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data" (OuterVolumeSpecName: "config-data") pod "b0575bfd-41e7-4099-8f6e-ccece45c8478" (UID: "b0575bfd-41e7-4099-8f6e-ccece45c8478"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.765941 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0575bfd-41e7-4099-8f6e-ccece45c8478" (UID: "b0575bfd-41e7-4099-8f6e-ccece45c8478"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.782903 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b0575bfd-41e7-4099-8f6e-ccece45c8478" (UID: "b0575bfd-41e7-4099-8f6e-ccece45c8478"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.820385 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8crw8" event={"ID":"975c922d-b91a-4cf6-9739-0d478d19765a","Type":"ContainerDied","Data":"53d2ffe580056abfb3ffff00eacb592e28263b522ff5fc6b6961c1abe2f0b38c"} Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.820435 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53d2ffe580056abfb3ffff00eacb592e28263b522ff5fc6b6961c1abe2f0b38c" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.820513 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8crw8" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.824604 4740 generic.go:334] "Generic (PLEG): container finished" podID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerID="894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" exitCode=0 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.824646 4740 generic.go:334] "Generic (PLEG): container finished" podID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerID="ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" exitCode=143 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.824900 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-log" containerID="cri-o://4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80" gracePeriod=30 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825255 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825372 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerDied","Data":"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6"} Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825413 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerDied","Data":"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887"} Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825431 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0575bfd-41e7-4099-8f6e-ccece45c8478","Type":"ContainerDied","Data":"0641bfa13271f6c66fab9d171fca52a4133710b322c360f7da95c74871437730"} Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825451 4740 scope.go:117] "RemoveContainer" containerID="894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.825625 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerName="nova-scheduler-scheduler" containerID="cri-o://1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" gracePeriod=30 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.826042 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-api" containerID="cri-o://0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7" gracePeriod=30 Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.827060 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.827610 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ptm9\" (UniqueName: \"kubernetes.io/projected/b0575bfd-41e7-4099-8f6e-ccece45c8478-kube-api-access-7ptm9\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.827638 4740 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.827653 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0575bfd-41e7-4099-8f6e-ccece45c8478-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.827667 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0575bfd-41e7-4099-8f6e-ccece45c8478-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.860821 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861187 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-log" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861201 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-log" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861216 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975c922d-b91a-4cf6-9739-0d478d19765a" containerName="nova-cell1-conductor-db-sync" Feb 16 13:12:26 crc 
kubenswrapper[4740]: I0216 13:12:26.861222 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="975c922d-b91a-4cf6-9739-0d478d19765a" containerName="nova-cell1-conductor-db-sync" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861230 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-metadata" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861235 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-metadata" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861244 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="init" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861249 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="init" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861266 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="dnsmasq-dns" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861272 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="dnsmasq-dns" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.861295 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4deadb-18ac-4d06-ba22-e391b19d38cd" containerName="nova-manage" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861300 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4deadb-18ac-4d06-ba22-e391b19d38cd" containerName="nova-manage" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861446 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fecd834c-f149-401b-9c43-810e215a68ed" containerName="dnsmasq-dns" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 
13:12:26.861459 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-metadata" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861476 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" containerName="nova-metadata-log" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861486 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4deadb-18ac-4d06-ba22-e391b19d38cd" containerName="nova-manage" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.861494 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="975c922d-b91a-4cf6-9739-0d478d19765a" containerName="nova-cell1-conductor-db-sync" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.862111 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.866715 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.872681 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.879534 4740 scope.go:117] "RemoveContainer" containerID="ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.888640 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.914886 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.936754 4740 scope.go:117] "RemoveContainer" containerID="894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" Feb 16 13:12:26 crc kubenswrapper[4740]: 
E0216 13:12:26.937614 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6\": container with ID starting with 894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6 not found: ID does not exist" containerID="894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.937749 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6"} err="failed to get container status \"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6\": rpc error: code = NotFound desc = could not find container \"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6\": container with ID starting with 894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6 not found: ID does not exist" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.937904 4740 scope.go:117] "RemoveContainer" containerID="ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" Feb 16 13:12:26 crc kubenswrapper[4740]: E0216 13:12:26.938382 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887\": container with ID starting with ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887 not found: ID does not exist" containerID="ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.938413 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887"} err="failed to get container status \"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887\": 
rpc error: code = NotFound desc = could not find container \"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887\": container with ID starting with ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887 not found: ID does not exist" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.938437 4740 scope.go:117] "RemoveContainer" containerID="894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.938622 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6"} err="failed to get container status \"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6\": rpc error: code = NotFound desc = could not find container \"894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6\": container with ID starting with 894c298abef75a190515d99f803f37ddbfb0b9d074be7ff60d96d35f3df2d9e6 not found: ID does not exist" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.938652 4740 scope.go:117] "RemoveContainer" containerID="ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.938874 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887"} err="failed to get container status \"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887\": rpc error: code = NotFound desc = could not find container \"ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887\": container with ID starting with ab814af73fc38a837477747bcd1a64fd3fda2a3a9965faff584cad180a492887 not found: ID does not exist" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.960927 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 
13:12:26.962761 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.964651 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.964993 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 13:12:26 crc kubenswrapper[4740]: I0216 13:12:26.970020 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.031509 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.031619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.031773 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdmpv\" (UniqueName: \"kubernetes.io/projected/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-kube-api-access-gdmpv\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134450 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134492 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdmpv\" (UniqueName: \"kubernetes.io/projected/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-kube-api-access-gdmpv\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134579 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfmmj\" (UniqueName: \"kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134782 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134869 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.134921 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.135038 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.141095 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.141713 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.170042 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdmpv\" (UniqueName: \"kubernetes.io/projected/4465f42a-9c2a-4aa7-9e45-fa28f78cddd7-kube-api-access-gdmpv\") pod \"nova-cell1-conductor-0\" (UID: 
\"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.183230 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.236576 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.236644 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfmmj\" (UniqueName: \"kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.236759 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.236783 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.236858 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle\") pod \"nova-metadata-0\" 
(UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.237446 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.242217 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.243698 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.277228 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.279678 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfmmj\" (UniqueName: \"kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj\") pod \"nova-metadata-0\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.344115 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.364556 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0575bfd-41e7-4099-8f6e-ccece45c8478" path="/var/lib/kubelet/pods/b0575bfd-41e7-4099-8f6e-ccece45c8478/volumes" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.365552 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fecd834c-f149-401b-9c43-810e215a68ed" path="/var/lib/kubelet/pods/fecd834c-f149-401b-9c43-810e215a68ed/volumes" Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.727968 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 13:12:27 crc kubenswrapper[4740]: W0216 13:12:27.730446 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4465f42a_9c2a_4aa7_9e45_fa28f78cddd7.slice/crio-5c026ddf515f5f93d40dd152e4c9c92d7f34e270b93d560516787917956503c5 WatchSource:0}: Error finding container 5c026ddf515f5f93d40dd152e4c9c92d7f34e270b93d560516787917956503c5: Status 404 returned error can't find the container with id 5c026ddf515f5f93d40dd152e4c9c92d7f34e270b93d560516787917956503c5 Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.836212 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7","Type":"ContainerStarted","Data":"5c026ddf515f5f93d40dd152e4c9c92d7f34e270b93d560516787917956503c5"} Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.840223 4740 generic.go:334] "Generic (PLEG): container finished" podID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerID="4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80" exitCode=143 Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.840377 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerDied","Data":"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80"} Feb 16 13:12:27 crc kubenswrapper[4740]: I0216 13:12:27.847616 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.859367 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4465f42a-9c2a-4aa7-9e45-fa28f78cddd7","Type":"ContainerStarted","Data":"bcd2b379d359f09a2dbb968f30fb1ebd6a91973a94b139c6fd50973f12c6d387"} Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.860495 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.863097 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerStarted","Data":"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece"} Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.863252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerStarted","Data":"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04"} Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.863344 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerStarted","Data":"ec2fb41ee7bc0398ae3bc21b2bd16713c705f380c7a143d9071f0092702463d2"} Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.882075 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.882057374 podStartE2EDuration="2.882057374s" podCreationTimestamp="2026-02-16 13:12:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:28.8765205 +0000 UTC m=+1176.252869231" watchObservedRunningTime="2026-02-16 13:12:28.882057374 +0000 UTC m=+1176.258406095" Feb 16 13:12:28 crc kubenswrapper[4740]: I0216 13:12:28.899355 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.899336227 podStartE2EDuration="2.899336227s" podCreationTimestamp="2026-02-16 13:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:28.893744731 +0000 UTC m=+1176.270093452" watchObservedRunningTime="2026-02-16 13:12:28.899336227 +0000 UTC m=+1176.275684938" Feb 16 13:12:29 crc kubenswrapper[4740]: E0216 13:12:29.982234 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:12:29 crc kubenswrapper[4740]: E0216 13:12:29.985022 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:12:29 crc kubenswrapper[4740]: E0216 13:12:29.986566 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] 
Feb 16 13:12:29 crc kubenswrapper[4740]: E0216 13:12:29.986616 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerName="nova-scheduler-scheduler" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.577096 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.744440 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") pod \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.744543 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data\") pod \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.744735 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4qt\" (UniqueName: \"kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt\") pod \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.750551 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt" (OuterVolumeSpecName: "kube-api-access-gg4qt") pod "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" (UID: "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f"). 
InnerVolumeSpecName "kube-api-access-gg4qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:31 crc kubenswrapper[4740]: E0216 13:12:31.770979 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle podName:bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f nodeName:}" failed. No retries permitted until 2026-02-16 13:12:32.27094694 +0000 UTC m=+1179.647295661 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle") pod "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" (UID: "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f") : error deleting /var/lib/kubelet/pods/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f/volume-subpaths: remove /var/lib/kubelet/pods/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f/volume-subpaths: no such file or directory Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.774308 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data" (OuterVolumeSpecName: "config-data") pod "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" (UID: "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.846765 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.846795 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg4qt\" (UniqueName: \"kubernetes.io/projected/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-kube-api-access-gg4qt\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.891215 4740 generic.go:334] "Generic (PLEG): container finished" podID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" exitCode=0 Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.891261 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f","Type":"ContainerDied","Data":"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08"} Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.891271 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.891295 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f","Type":"ContainerDied","Data":"049599c73a96f0a67b4dc198dd8d7eea6d60e002de96297a4f60108bb30585bc"} Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.891315 4740 scope.go:117] "RemoveContainer" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.914043 4740 scope.go:117] "RemoveContainer" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" Feb 16 13:12:31 crc kubenswrapper[4740]: E0216 13:12:31.914410 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08\": container with ID starting with 1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08 not found: ID does not exist" containerID="1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08" Feb 16 13:12:31 crc kubenswrapper[4740]: I0216 13:12:31.914464 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08"} err="failed to get container status \"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08\": rpc error: code = NotFound desc = could not find container \"1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08\": container with ID starting with 1dad64d8a8071adad4fbc8aa44b5f24bce85195980092e5b3abb5c73a17f0b08 not found: ID does not exist" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.218194 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 
13:12:32.346745 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.346802 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.357026 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") pod \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\" (UID: \"bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f\") " Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.391133 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" (UID: "bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.459023 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.529221 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.546513 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.557928 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: E0216 13:12:32.558353 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerName="nova-scheduler-scheduler" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.558370 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerName="nova-scheduler-scheduler" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.558546 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" containerName="nova-scheduler-scheduler" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.559197 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.571434 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.573273 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.662470 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbbdl\" (UniqueName: \"kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.662742 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.662918 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.666184 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764056 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle\") pod \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764232 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data\") pod \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764308 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs\") pod \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764385 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8nmx\" (UniqueName: \"kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx\") pod \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\" (UID: \"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac\") " Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764651 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764733 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbbdl\" 
(UniqueName: \"kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.764868 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.765301 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs" (OuterVolumeSpecName: "logs") pod "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" (UID: "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.767702 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx" (OuterVolumeSpecName: "kube-api-access-f8nmx") pod "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" (UID: "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac"). InnerVolumeSpecName "kube-api-access-f8nmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.768048 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.768448 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.785631 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbbdl\" (UniqueName: \"kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl\") pod \"nova-scheduler-0\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.812199 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" (UID: "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.817172 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data" (OuterVolumeSpecName: "config-data") pod "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" (UID: "620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.866414 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8nmx\" (UniqueName: \"kubernetes.io/projected/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-kube-api-access-f8nmx\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.866453 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.866470 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.866483 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.904509 4740 generic.go:334] "Generic (PLEG): container finished" podID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerID="0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7" exitCode=0 Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.904570 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.904606 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerDied","Data":"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7"} Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.904645 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac","Type":"ContainerDied","Data":"8f6654002ae747ce606f1b1f23d169b4f4ec2bec72a65e7471234bf4a2173103"} Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.904665 4740 scope.go:117] "RemoveContainer" containerID="0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.938963 4740 scope.go:117] "RemoveContainer" containerID="4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.949657 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.959180 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.962531 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.970976 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: E0216 13:12:32.971379 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-api" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.971398 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-api" Feb 16 13:12:32 crc kubenswrapper[4740]: E0216 13:12:32.971413 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-log" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.971419 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-log" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.971604 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-log" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.971633 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" containerName="nova-api-api" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.972597 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.977758 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.986714 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.992089 4740 scope.go:117] "RemoveContainer" containerID="0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7" Feb 16 13:12:32 crc kubenswrapper[4740]: E0216 13:12:32.993635 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7\": container with ID starting with 0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7 not found: ID does not exist" containerID="0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.993684 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7"} err="failed to get container status \"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7\": rpc error: code = NotFound desc = could not find container \"0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7\": container with ID starting with 0c5440165b002d8e68a994eb23945987fb2bf1c496da61bf6134eb15b1a487f7 not found: ID does not exist" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.993715 4740 scope.go:117] "RemoveContainer" containerID="4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80" Feb 16 13:12:32 crc kubenswrapper[4740]: E0216 13:12:32.996136 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80\": container with ID starting with 4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80 not found: ID does not exist" containerID="4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80" Feb 16 13:12:32 crc kubenswrapper[4740]: I0216 13:12:32.996176 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80"} err="failed to get container status \"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80\": rpc error: code = NotFound desc = could not find container \"4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80\": container with ID starting with 4985494b07ffecc48d227b230b46b11ce740f46c2d0f201a7779916e6d9bcc80 not found: ID does not exist" Feb 16 13:12:33 crc kubenswrapper[4740]: E0216 13:12:33.034533 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod620a5f5a_99ec_4801_ba17_d9a5fa7ea7ac.slice\": RecentStats: unable to find data in memory cache]" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.069088 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvp2g\" (UniqueName: \"kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.069161 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 
13:12:33.069213 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.069278 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.170638 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.171191 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvp2g\" (UniqueName: \"kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.171352 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.171506 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data\") 
pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.171736 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.176514 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.176574 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.187316 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvp2g\" (UniqueName: \"kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g\") pod \"nova-api-0\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") " pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.294288 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac" path="/var/lib/kubelet/pods/620a5f5a-99ec-4801-ba17-d9a5fa7ea7ac/volumes" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.294897 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f" path="/var/lib/kubelet/pods/bcb162a0-b3b3-4b3a-b8f2-4d3a2997437f/volumes" Feb 16 13:12:33 crc 
kubenswrapper[4740]: I0216 13:12:33.300773 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.535044 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.737960 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:33 crc kubenswrapper[4740]: W0216 13:12:33.742103 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63770e5c_58d7_48e9_b7dc_b0ed093c5a01.slice/crio-c4d683b87b8053b36a187d093b9f0ec534b07751c618fc2b202e8bf53d44bb7c WatchSource:0}: Error finding container c4d683b87b8053b36a187d093b9f0ec534b07751c618fc2b202e8bf53d44bb7c: Status 404 returned error can't find the container with id c4d683b87b8053b36a187d093b9f0ec534b07751c618fc2b202e8bf53d44bb7c Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.916430 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerStarted","Data":"c4d683b87b8053b36a187d093b9f0ec534b07751c618fc2b202e8bf53d44bb7c"} Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.918803 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8eb17c9-d042-4220-bc24-e56054e5be4d","Type":"ContainerStarted","Data":"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff"} Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.918857 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8eb17c9-d042-4220-bc24-e56054e5be4d","Type":"ContainerStarted","Data":"77398076beb6a4d90d4fcff474e340ac86b26fe45053c777aa70824a5ad08261"} Feb 16 13:12:33 crc kubenswrapper[4740]: I0216 13:12:33.946543 4740 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.946519187 podStartE2EDuration="1.946519187s" podCreationTimestamp="2026-02-16 13:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:33.937996379 +0000 UTC m=+1181.314345100" watchObservedRunningTime="2026-02-16 13:12:33.946519187 +0000 UTC m=+1181.322867908" Feb 16 13:12:34 crc kubenswrapper[4740]: I0216 13:12:34.929953 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerStarted","Data":"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b"} Feb 16 13:12:34 crc kubenswrapper[4740]: I0216 13:12:34.930221 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerStarted","Data":"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686"} Feb 16 13:12:34 crc kubenswrapper[4740]: I0216 13:12:34.951414 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.951359094 podStartE2EDuration="2.951359094s" podCreationTimestamp="2026-02-16 13:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:34.946492211 +0000 UTC m=+1182.322840962" watchObservedRunningTime="2026-02-16 13:12:34.951359094 +0000 UTC m=+1182.327707835" Feb 16 13:12:37 crc kubenswrapper[4740]: I0216 13:12:37.345548 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 13:12:37 crc kubenswrapper[4740]: I0216 13:12:37.347627 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 13:12:37 crc kubenswrapper[4740]: I0216 
13:12:37.963582 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 13:12:38 crc kubenswrapper[4740]: I0216 13:12:38.368145 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:38 crc kubenswrapper[4740]: I0216 13:12:38.368561 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:42 crc kubenswrapper[4740]: I0216 13:12:42.963897 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 13:12:42 crc kubenswrapper[4740]: I0216 13:12:42.991219 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 13:12:43 crc kubenswrapper[4740]: I0216 13:12:43.051391 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 13:12:43 crc kubenswrapper[4740]: I0216 13:12:43.301235 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:12:43 crc kubenswrapper[4740]: I0216 13:12:43.301571 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:12:44 crc kubenswrapper[4740]: I0216 13:12:44.384111 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:44 crc kubenswrapper[4740]: I0216 13:12:44.384174 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:12:45 crc kubenswrapper[4740]: I0216 13:12:45.601901 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 13:12:47 crc kubenswrapper[4740]: I0216 13:12:47.355431 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 13:12:47 crc kubenswrapper[4740]: I0216 13:12:47.361264 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 13:12:47 crc kubenswrapper[4740]: I0216 13:12:47.379607 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 13:12:48 crc kubenswrapper[4740]: I0216 13:12:48.072110 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.086405 4740 generic.go:334] "Generic (PLEG): container finished" podID="013bed9b-6a31-4094-bb95-addbf3f4bd01" containerID="23eee7be68a23b02362f55885f674544ef0e19cd3bbf30ad4d01d2107403ffde" exitCode=137 Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.087679 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"013bed9b-6a31-4094-bb95-addbf3f4bd01","Type":"ContainerDied","Data":"23eee7be68a23b02362f55885f674544ef0e19cd3bbf30ad4d01d2107403ffde"} Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.249787 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.400125 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle\") pod \"013bed9b-6a31-4094-bb95-addbf3f4bd01\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.400232 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtlkn\" (UniqueName: \"kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn\") pod \"013bed9b-6a31-4094-bb95-addbf3f4bd01\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.400389 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data\") pod \"013bed9b-6a31-4094-bb95-addbf3f4bd01\" (UID: \"013bed9b-6a31-4094-bb95-addbf3f4bd01\") " Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.406541 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn" (OuterVolumeSpecName: "kube-api-access-mtlkn") pod "013bed9b-6a31-4094-bb95-addbf3f4bd01" (UID: "013bed9b-6a31-4094-bb95-addbf3f4bd01"). InnerVolumeSpecName "kube-api-access-mtlkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.429634 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data" (OuterVolumeSpecName: "config-data") pod "013bed9b-6a31-4094-bb95-addbf3f4bd01" (UID: "013bed9b-6a31-4094-bb95-addbf3f4bd01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.431895 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "013bed9b-6a31-4094-bb95-addbf3f4bd01" (UID: "013bed9b-6a31-4094-bb95-addbf3f4bd01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.502212 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.502246 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtlkn\" (UniqueName: \"kubernetes.io/projected/013bed9b-6a31-4094-bb95-addbf3f4bd01-kube-api-access-mtlkn\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:50 crc kubenswrapper[4740]: I0216 13:12:50.502262 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/013bed9b-6a31-4094-bb95-addbf3f4bd01-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.096905 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"013bed9b-6a31-4094-bb95-addbf3f4bd01","Type":"ContainerDied","Data":"3cfed7c43284629a6351a4ee50333996e9abd11f4199ce2c961af5116ad1bff2"} Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.096960 4740 scope.go:117] "RemoveContainer" containerID="23eee7be68a23b02362f55885f674544ef0e19cd3bbf30ad4d01d2107403ffde" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.096975 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.126358 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.133589 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.155561 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:51 crc kubenswrapper[4740]: E0216 13:12:51.156044 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013bed9b-6a31-4094-bb95-addbf3f4bd01" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.156067 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="013bed9b-6a31-4094-bb95-addbf3f4bd01" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.156328 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="013bed9b-6a31-4094-bb95-addbf3f4bd01" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.157015 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.160077 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.160319 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.164443 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.167165 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.291722 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="013bed9b-6a31-4094-bb95-addbf3f4bd01" path="/var/lib/kubelet/pods/013bed9b-6a31-4094-bb95-addbf3f4bd01/volumes" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.313942 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.313996 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.314285 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.314486 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg6d6\" (UniqueName: \"kubernetes.io/projected/94da2ded-002e-4aa6-9828-404bee84c146-kube-api-access-rg6d6\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.314547 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.415623 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg6d6\" (UniqueName: \"kubernetes.io/projected/94da2ded-002e-4aa6-9828-404bee84c146-kube-api-access-rg6d6\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.415677 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.415708 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.415730 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.415858 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.422270 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.422329 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.423703 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.424493 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/94da2ded-002e-4aa6-9828-404bee84c146-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.434554 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg6d6\" (UniqueName: \"kubernetes.io/projected/94da2ded-002e-4aa6-9828-404bee84c146-kube-api-access-rg6d6\") pod \"nova-cell1-novncproxy-0\" (UID: \"94da2ded-002e-4aa6-9828-404bee84c146\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.484732 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:51 crc kubenswrapper[4740]: I0216 13:12:51.950902 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:12:52 crc kubenswrapper[4740]: I0216 13:12:52.108464 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"94da2ded-002e-4aa6-9828-404bee84c146","Type":"ContainerStarted","Data":"8df082995828aed6a7175bebbb77e69bc7047b9b6e0fd4bdf78dce6766443465"} Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.118243 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"94da2ded-002e-4aa6-9828-404bee84c146","Type":"ContainerStarted","Data":"2c509854892016222cad13899eb43eee1142843f607470d93ce9615bca7fc20a"} Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.140679 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.14065428 podStartE2EDuration="2.14065428s" podCreationTimestamp="2026-02-16 13:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:53.131358108 +0000 UTC m=+1200.507706849" watchObservedRunningTime="2026-02-16 13:12:53.14065428 +0000 UTC m=+1200.517002991" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.309224 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.309580 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.309829 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.309848 4740 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.312698 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.312769 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.508895 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.510821 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.536394 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607461 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x6mf\" (UniqueName: \"kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607525 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607573 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607682 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.607771 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.709595 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.709729 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.709794 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x6mf\" (UniqueName: \"kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.709905 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.709951 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.710018 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.711276 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.711298 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.711429 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.711555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.711930 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.730500 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x6mf\" (UniqueName: \"kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf\") pod 
\"dnsmasq-dns-5c7b6c5df9-kdzv4\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:53 crc kubenswrapper[4740]: I0216 13:12:53.867865 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:54 crc kubenswrapper[4740]: I0216 13:12:54.347771 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.152095 4740 generic.go:334] "Generic (PLEG): container finished" podID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerID="93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831" exitCode=0 Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.152204 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" event={"ID":"2d75f780-0301-46d4-aa0b-ecdf66b8bc21","Type":"ContainerDied","Data":"93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831"} Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.152533 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" event={"ID":"2d75f780-0301-46d4-aa0b-ecdf66b8bc21","Type":"ContainerStarted","Data":"36059a67ae20b43daa15e0427481604179329ee78b6747e45aa5695fe8ffaa0e"} Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.476624 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.478183 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="proxy-httpd" containerID="cri-o://b4b0ed2b2eabee1e3146c4ac0b0d7bb3dfee892915eda3f99bf8c08572c0a096" gracePeriod=30 Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.478175 4740 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="sg-core" containerID="cri-o://705365a97839558eb2816feed6e761a8ab9d12cd4c2bcbf3179208f1f08c7614" gracePeriod=30 Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.478361 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-notification-agent" containerID="cri-o://df89fbb68337555ca32e22cc3c544316db4797904bb59c1bc4b6fb2d4bd470a9" gracePeriod=30 Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.477290 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-central-agent" containerID="cri-o://0e9acf49c8c57a0192695f150464693337d9eca058d4bead53eaefc925c92141" gracePeriod=30 Feb 16 13:12:55 crc kubenswrapper[4740]: I0216 13:12:55.677711 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.162497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" event={"ID":"2d75f780-0301-46d4-aa0b-ecdf66b8bc21","Type":"ContainerStarted","Data":"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32"} Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.162852 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165459 4740 generic.go:334] "Generic (PLEG): container finished" podID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerID="b4b0ed2b2eabee1e3146c4ac0b0d7bb3dfee892915eda3f99bf8c08572c0a096" exitCode=0 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165493 4740 generic.go:334] "Generic (PLEG): container finished" podID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" 
containerID="705365a97839558eb2816feed6e761a8ab9d12cd4c2bcbf3179208f1f08c7614" exitCode=2 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165501 4740 generic.go:334] "Generic (PLEG): container finished" podID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerID="df89fbb68337555ca32e22cc3c544316db4797904bb59c1bc4b6fb2d4bd470a9" exitCode=0 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165509 4740 generic.go:334] "Generic (PLEG): container finished" podID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerID="0e9acf49c8c57a0192695f150464693337d9eca058d4bead53eaefc925c92141" exitCode=0 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165677 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-log" containerID="cri-o://e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b" gracePeriod=30 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.165979 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerDied","Data":"b4b0ed2b2eabee1e3146c4ac0b0d7bb3dfee892915eda3f99bf8c08572c0a096"} Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.166016 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerDied","Data":"705365a97839558eb2816feed6e761a8ab9d12cd4c2bcbf3179208f1f08c7614"} Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.166027 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerDied","Data":"df89fbb68337555ca32e22cc3c544316db4797904bb59c1bc4b6fb2d4bd470a9"} Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.166035 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerDied","Data":"0e9acf49c8c57a0192695f150464693337d9eca058d4bead53eaefc925c92141"} Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.166083 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-api" containerID="cri-o://b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686" gracePeriod=30 Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.186777 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" podStartSLOduration=3.186752685 podStartE2EDuration="3.186752685s" podCreationTimestamp="2026-02-16 13:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:12:56.178789405 +0000 UTC m=+1203.555138146" watchObservedRunningTime="2026-02-16 13:12:56.186752685 +0000 UTC m=+1203.563101406" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.290132 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360408 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck59f\" (UniqueName: \"kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360512 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360567 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360626 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360656 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360744 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360908 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.360953 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data\") pod \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\" (UID: \"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c\") " Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.363429 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.365495 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.366790 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts" (OuterVolumeSpecName: "scripts") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.368966 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f" (OuterVolumeSpecName: "kube-api-access-ck59f") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "kube-api-access-ck59f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.399402 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.440865 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463064 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463097 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck59f\" (UniqueName: \"kubernetes.io/projected/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-kube-api-access-ck59f\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463108 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463118 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463130 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.463138 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.473787 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: 
"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.485023 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.488248 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data" (OuterVolumeSpecName: "config-data") pod "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" (UID: "c7722219-9b84-4adf-bf81-c4ac8a0d9d2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.564700 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:56 crc kubenswrapper[4740]: I0216 13:12:56.564749 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.177199 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.177473 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7722219-9b84-4adf-bf81-c4ac8a0d9d2c","Type":"ContainerDied","Data":"b0fc28209f10a3e996f2004b2935e7f7f0d3702abe3519ffcea0b77c33e5a593"} Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.177639 4740 scope.go:117] "RemoveContainer" containerID="b4b0ed2b2eabee1e3146c4ac0b0d7bb3dfee892915eda3f99bf8c08572c0a096" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.179700 4740 generic.go:334] "Generic (PLEG): container finished" podID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerID="e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b" exitCode=143 Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.179781 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerDied","Data":"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b"} Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.212949 4740 scope.go:117] "RemoveContainer" containerID="705365a97839558eb2816feed6e761a8ab9d12cd4c2bcbf3179208f1f08c7614" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.217107 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.229607 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.246204 4740 scope.go:117] "RemoveContainer" containerID="df89fbb68337555ca32e22cc3c544316db4797904bb59c1bc4b6fb2d4bd470a9" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256125 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:57 crc kubenswrapper[4740]: E0216 13:12:57.256615 4740 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-notification-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256636 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-notification-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: E0216 13:12:57.256665 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="sg-core" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256676 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="sg-core" Feb 16 13:12:57 crc kubenswrapper[4740]: E0216 13:12:57.256689 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="proxy-httpd" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256696 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="proxy-httpd" Feb 16 13:12:57 crc kubenswrapper[4740]: E0216 13:12:57.256711 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-central-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256720 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-central-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256942 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-central-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256971 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="proxy-httpd" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.256987 4740 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="ceilometer-notification-agent" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.257004 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" containerName="sg-core" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.259337 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.261903 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.261970 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.261970 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.274448 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.290824 4740 scope.go:117] "RemoveContainer" containerID="0e9acf49c8c57a0192695f150464693337d9eca058d4bead53eaefc925c92141" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.304511 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7722219-9b84-4adf-bf81-c4ac8a0d9d2c" path="/var/lib/kubelet/pods/c7722219-9b84-4adf-bf81-c4ac8a0d9d2c/volumes" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.323176 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:12:57 crc kubenswrapper[4740]: E0216 13:12:57.323863 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-qqsmc log-httpd run-httpd scripts sg-core-conf-yaml], unattached 
volumes=[], failed to process volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-qqsmc log-httpd run-httpd scripts sg-core-conf-yaml]: context canceled" pod="openstack/ceilometer-0" podUID="74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380491 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380557 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380677 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380716 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqsmc\" (UniqueName: \"kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0" Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380769 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380789 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380902 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.380936 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.482848 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.483668 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.483721 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.483781 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqsmc\" (UniqueName: \"kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.483840 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.483869 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.484033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.484107 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.484358 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.485079 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.487555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.487729 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.488345 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.496804 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.497624 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:57 crc kubenswrapper[4740]: I0216 13:12:57.498213 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqsmc\" (UniqueName: \"kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc\") pod \"ceilometer-0\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") " pod="openstack/ceilometer-0"
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.191072 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.205541 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299577 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") "
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299652 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") "
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299684 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") "
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299768 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqsmc\" (UniqueName: \"kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") "
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299842 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") "
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299924 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") "
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299953 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") "
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.299971 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs\") pod \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\" (UID: \"74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9\") "
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.301218 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.301691 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.305091 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.305124 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.306030 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data" (OuterVolumeSpecName: "config-data") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.306063 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts" (OuterVolumeSpecName: "scripts") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.306465 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc" (OuterVolumeSpecName: "kube-api-access-qqsmc") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "kube-api-access-qqsmc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.327567 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" (UID: "74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402868 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402910 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402924 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqsmc\" (UniqueName: \"kubernetes.io/projected/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-kube-api-access-qqsmc\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402943 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402956 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402970 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402984 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:58 crc kubenswrapper[4740]: I0216 13:12:58.402996 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.198705 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.252241 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.263706 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.319278 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9" path="/var/lib/kubelet/pods/74a2f9d2-6fdf-4e0a-9a7d-2f5c2507d8f9/volumes"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.335798 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.338927 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.342940 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.343519 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.344399 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.350544 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.420952 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421059 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pmf8\" (UniqueName: \"kubernetes.io/projected/dcfe5822-8cae-409c-8224-b1ce2c452e02-kube-api-access-6pmf8\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421086 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-config-data\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421118 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-log-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421153 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-scripts\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421233 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421273 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-run-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.421324 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-run-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522633 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522666 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522716 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pmf8\" (UniqueName: \"kubernetes.io/projected/dcfe5822-8cae-409c-8224-b1ce2c452e02-kube-api-access-6pmf8\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522737 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-config-data\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522761 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-log-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522783 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-scripts\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.522847 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.523083 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-run-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.523929 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcfe5822-8cae-409c-8224-b1ce2c452e02-log-httpd\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.527635 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.528221 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-scripts\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.528459 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.529084 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-config-data\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.529619 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcfe5822-8cae-409c-8224-b1ce2c452e02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.560733 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pmf8\" (UniqueName: \"kubernetes.io/projected/dcfe5822-8cae-409c-8224-b1ce2c452e02-kube-api-access-6pmf8\") pod \"ceilometer-0\" (UID: \"dcfe5822-8cae-409c-8224-b1ce2c452e02\") " pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.763266 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.767244 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.827857 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle\") pod \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") "
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.827928 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvp2g\" (UniqueName: \"kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g\") pod \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") "
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.827971 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data\") pod \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") "
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.828070 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs\") pod \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\" (UID: \"63770e5c-58d7-48e9-b7dc-b0ed093c5a01\") "
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.829093 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs" (OuterVolumeSpecName: "logs") pod "63770e5c-58d7-48e9-b7dc-b0ed093c5a01" (UID: "63770e5c-58d7-48e9-b7dc-b0ed093c5a01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.833730 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g" (OuterVolumeSpecName: "kube-api-access-nvp2g") pod "63770e5c-58d7-48e9-b7dc-b0ed093c5a01" (UID: "63770e5c-58d7-48e9-b7dc-b0ed093c5a01"). InnerVolumeSpecName "kube-api-access-nvp2g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.860032 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63770e5c-58d7-48e9-b7dc-b0ed093c5a01" (UID: "63770e5c-58d7-48e9-b7dc-b0ed093c5a01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.870052 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data" (OuterVolumeSpecName: "config-data") pod "63770e5c-58d7-48e9-b7dc-b0ed093c5a01" (UID: "63770e5c-58d7-48e9-b7dc-b0ed093c5a01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.930649 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.930691 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvp2g\" (UniqueName: \"kubernetes.io/projected/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-kube-api-access-nvp2g\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.930706 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:12:59 crc kubenswrapper[4740]: I0216 13:12:59.930718 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63770e5c-58d7-48e9-b7dc-b0ed093c5a01-logs\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.210224 4740 generic.go:334] "Generic (PLEG): container finished" podID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerID="b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686" exitCode=0
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.210288 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerDied","Data":"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686"}
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.210566 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63770e5c-58d7-48e9-b7dc-b0ed093c5a01","Type":"ContainerDied","Data":"c4d683b87b8053b36a187d093b9f0ec534b07751c618fc2b202e8bf53d44bb7c"}
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.210585 4740 scope.go:117] "RemoveContainer" containerID="b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.210304 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.241061 4740 scope.go:117] "RemoveContainer" containerID="e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.251960 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.270131 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.270736 4740 scope.go:117] "RemoveContainer" containerID="b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686"
Feb 16 13:13:00 crc kubenswrapper[4740]: E0216 13:13:00.271230 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686\": container with ID starting with b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686 not found: ID does not exist" containerID="b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.271267 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686"} err="failed to get container status \"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686\": rpc error: code = NotFound desc = could not find container \"b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686\": container with ID starting with b3a1cb8cc644dbb1a589f2d867323e7774d8e4f22541aa5f64b253abe9479686 not found: ID does not exist"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.271292 4740 scope.go:117] "RemoveContainer" containerID="e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b"
Feb 16 13:13:00 crc kubenswrapper[4740]: E0216 13:13:00.272513 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b\": container with ID starting with e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b not found: ID does not exist" containerID="e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.272552 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b"} err="failed to get container status \"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b\": rpc error: code = NotFound desc = could not find container \"e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b\": container with ID starting with e5687ebcb55a4e65ab0eaf07b5374fa3b89179763ec9abf1ba508243dfe5249b not found: ID does not exist"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.284368 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:13:00 crc kubenswrapper[4740]: E0216 13:13:00.285082 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-log"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.285178 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-log"
Feb 16 13:13:00 crc kubenswrapper[4740]: E0216 13:13:00.285282 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-api"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.285488 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-api"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.285791 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-api"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.285929 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" containerName="nova-api-log"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.287242 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.292372 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.292645 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.292920 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.296907 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.320350 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:13:00 crc kubenswrapper[4740]: W0216 13:13:00.325953 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcfe5822_8cae_409c_8224_b1ce2c452e02.slice/crio-4a9b84755aba030c85041822dd15d25015d1fb759bd227189a5abf203affe94d WatchSource:0}: Error finding container 4a9b84755aba030c85041822dd15d25015d1fb759bd227189a5abf203affe94d: Status 404 returned error can't find the container with id 4a9b84755aba030c85041822dd15d25015d1fb759bd227189a5abf203affe94d
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.338730 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.338909 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.339070 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.339124 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.339196 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.339664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmwj2\" (UniqueName: \"kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441607 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmwj2\" (UniqueName: \"kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441767 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441866 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0"
Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441899 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs\") pod
\"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.441937 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.442396 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.446847 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.448558 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.449187 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.449783 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.466560 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmwj2\" (UniqueName: \"kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2\") pod \"nova-api-0\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " pod="openstack/nova-api-0" Feb 16 13:13:00 crc kubenswrapper[4740]: I0216 13:13:00.606917 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.059107 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:01 crc kubenswrapper[4740]: W0216 13:13:01.061004 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74e5f1b3_dadf_447d_b4c7_6c7274acb380.slice/crio-79ef3154c2df0c308700bb21d2bcd8c8251a70b34c1f3505854cc7ce16a8e1aa WatchSource:0}: Error finding container 79ef3154c2df0c308700bb21d2bcd8c8251a70b34c1f3505854cc7ce16a8e1aa: Status 404 returned error can't find the container with id 79ef3154c2df0c308700bb21d2bcd8c8251a70b34c1f3505854cc7ce16a8e1aa Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.222747 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcfe5822-8cae-409c-8224-b1ce2c452e02","Type":"ContainerStarted","Data":"5a681abd671eee6caab206fba914b067565558873fcb6b5018e014d280fd8723"} Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.222803 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcfe5822-8cae-409c-8224-b1ce2c452e02","Type":"ContainerStarted","Data":"4a9b84755aba030c85041822dd15d25015d1fb759bd227189a5abf203affe94d"} Feb 16 
13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.225442 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerStarted","Data":"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b"} Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.225497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerStarted","Data":"79ef3154c2df0c308700bb21d2bcd8c8251a70b34c1f3505854cc7ce16a8e1aa"} Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.295933 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63770e5c-58d7-48e9-b7dc-b0ed093c5a01" path="/var/lib/kubelet/pods/63770e5c-58d7-48e9-b7dc-b0ed093c5a01/volumes" Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.485384 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:13:01 crc kubenswrapper[4740]: I0216 13:13:01.509148 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.239099 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerStarted","Data":"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7"} Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.241731 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcfe5822-8cae-409c-8224-b1ce2c452e02","Type":"ContainerStarted","Data":"1305e976ed730967af7a5cc045b686f425f616438f8268e06c449fac060878c1"} Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.272287 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.272268754 
podStartE2EDuration="2.272268754s" podCreationTimestamp="2026-02-16 13:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:13:02.26670794 +0000 UTC m=+1209.643056681" watchObservedRunningTime="2026-02-16 13:13:02.272268754 +0000 UTC m=+1209.648617475" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.277350 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.616596 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-l9964"] Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.618896 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.621502 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.622578 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.629364 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9964"] Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.709753 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.709846 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbh27\" 
(UniqueName: \"kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.709898 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.709917 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.812039 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.812323 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbh27\" (UniqueName: \"kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.812435 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.812511 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.816788 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.816790 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.831440 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data\") pod \"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.833349 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbh27\" (UniqueName: \"kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27\") pod 
\"nova-cell1-cell-mapping-l9964\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:02 crc kubenswrapper[4740]: I0216 13:13:02.942506 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:03 crc kubenswrapper[4740]: I0216 13:13:03.254911 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcfe5822-8cae-409c-8224-b1ce2c452e02","Type":"ContainerStarted","Data":"033eacf45cba8509b3bba630b0a035fdf750f10e02b2a3370f3aceb052ea3d1b"} Feb 16 13:13:03 crc kubenswrapper[4740]: I0216 13:13:03.408227 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9964"] Feb 16 13:13:03 crc kubenswrapper[4740]: I0216 13:13:03.872981 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:13:03 crc kubenswrapper[4740]: I0216 13:13:03.950312 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"] Feb 16 13:13:03 crc kubenswrapper[4740]: I0216 13:13:03.950584 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="dnsmasq-dns" containerID="cri-o://8afebe6e22dd2d5cabfba1e3897a579c79457dc251f6bc22ca4178b93bc80a65" gracePeriod=10 Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.300508 4740 generic.go:334] "Generic (PLEG): container finished" podID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerID="8afebe6e22dd2d5cabfba1e3897a579c79457dc251f6bc22ca4178b93bc80a65" exitCode=0 Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.300782 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" 
event={"ID":"56c1f9b3-2d24-4a15-a9ed-1e580d07368d","Type":"ContainerDied","Data":"8afebe6e22dd2d5cabfba1e3897a579c79457dc251f6bc22ca4178b93bc80a65"} Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.309158 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9964" event={"ID":"798bf8e1-4a33-48eb-bbb3-9be8d38027de","Type":"ContainerStarted","Data":"88c2737be162f9ce8e9f65cc3abfbf6976ffda5494fd0d0154f6ce2147c27b29"} Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.309206 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9964" event={"ID":"798bf8e1-4a33-48eb-bbb3-9be8d38027de","Type":"ContainerStarted","Data":"7890daaec85b079293eb53769939fb89e5c479ff49cd46a21cd3a2bb42a4d2ce"} Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.324343 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcfe5822-8cae-409c-8224-b1ce2c452e02","Type":"ContainerStarted","Data":"804de6233210445ee2ac23f44d8c776698377d2f607e29440a0c4e15a1e65b5f"} Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.324861 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.333408 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-l9964" podStartSLOduration=2.333387395 podStartE2EDuration="2.333387395s" podCreationTimestamp="2026-02-16 13:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:13:04.330085392 +0000 UTC m=+1211.706434113" watchObservedRunningTime="2026-02-16 13:13:04.333387395 +0000 UTC m=+1211.709736106" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.373650 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=1.867817362 podStartE2EDuration="5.373634189s" podCreationTimestamp="2026-02-16 13:12:59 +0000 UTC" firstStartedPulling="2026-02-16 13:13:00.328362885 +0000 UTC m=+1207.704711606" lastFinishedPulling="2026-02-16 13:13:03.834179712 +0000 UTC m=+1211.210528433" observedRunningTime="2026-02-16 13:13:04.353769086 +0000 UTC m=+1211.730117817" watchObservedRunningTime="2026-02-16 13:13:04.373634189 +0000 UTC m=+1211.749982910" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.412325 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548426 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0\") pod \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548468 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc\") pod \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548503 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb\") pod \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548593 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4b75\" (UniqueName: \"kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75\") pod 
\"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548708 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config\") pod \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.548754 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb\") pod \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\" (UID: \"56c1f9b3-2d24-4a15-a9ed-1e580d07368d\") " Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.576269 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75" (OuterVolumeSpecName: "kube-api-access-b4b75") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "kube-api-access-b4b75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.600078 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config" (OuterVolumeSpecName: "config") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.605416 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.617337 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.621320 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.627165 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56c1f9b3-2d24-4a15-a9ed-1e580d07368d" (UID: "56c1f9b3-2d24-4a15-a9ed-1e580d07368d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.650638 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4b75\" (UniqueName: \"kubernetes.io/projected/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-kube-api-access-b4b75\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.650931 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.651017 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.651114 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.651200 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:04 crc kubenswrapper[4740]: I0216 13:13:04.651277 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c1f9b3-2d24-4a15-a9ed-1e580d07368d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.342066 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.342952 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-zmbdr" event={"ID":"56c1f9b3-2d24-4a15-a9ed-1e580d07368d","Type":"ContainerDied","Data":"0f5c11f9e9c71c25a9a0ab7c58bca8206f5d89e3f123e7f468eccce30de6f1c4"} Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.344344 4740 scope.go:117] "RemoveContainer" containerID="8afebe6e22dd2d5cabfba1e3897a579c79457dc251f6bc22ca4178b93bc80a65" Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.368897 4740 scope.go:117] "RemoveContainer" containerID="427a4347253b1f218cb29a2e5fa718786b3338a5e82bf3ce06ee2ab5a6254207" Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.380379 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"] Feb 16 13:13:05 crc kubenswrapper[4740]: I0216 13:13:05.388239 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-zmbdr"] Feb 16 13:13:07 crc kubenswrapper[4740]: I0216 13:13:07.292380 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" path="/var/lib/kubelet/pods/56c1f9b3-2d24-4a15-a9ed-1e580d07368d/volumes" Feb 16 13:13:09 crc kubenswrapper[4740]: I0216 13:13:09.386395 4740 generic.go:334] "Generic (PLEG): container finished" podID="798bf8e1-4a33-48eb-bbb3-9be8d38027de" containerID="88c2737be162f9ce8e9f65cc3abfbf6976ffda5494fd0d0154f6ce2147c27b29" exitCode=0 Feb 16 13:13:09 crc kubenswrapper[4740]: I0216 13:13:09.386520 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9964" event={"ID":"798bf8e1-4a33-48eb-bbb3-9be8d38027de","Type":"ContainerDied","Data":"88c2737be162f9ce8e9f65cc3abfbf6976ffda5494fd0d0154f6ce2147c27b29"} Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.608072 4740 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.608408 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.798551 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.875497 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data\") pod \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.875584 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbh27\" (UniqueName: \"kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27\") pod \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.875652 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts\") pod \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.875872 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle\") pod \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\" (UID: \"798bf8e1-4a33-48eb-bbb3-9be8d38027de\") " Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.882459 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts" (OuterVolumeSpecName: "scripts") pod "798bf8e1-4a33-48eb-bbb3-9be8d38027de" (UID: "798bf8e1-4a33-48eb-bbb3-9be8d38027de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.884216 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27" (OuterVolumeSpecName: "kube-api-access-jbh27") pod "798bf8e1-4a33-48eb-bbb3-9be8d38027de" (UID: "798bf8e1-4a33-48eb-bbb3-9be8d38027de"). InnerVolumeSpecName "kube-api-access-jbh27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.905260 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data" (OuterVolumeSpecName: "config-data") pod "798bf8e1-4a33-48eb-bbb3-9be8d38027de" (UID: "798bf8e1-4a33-48eb-bbb3-9be8d38027de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.911031 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "798bf8e1-4a33-48eb-bbb3-9be8d38027de" (UID: "798bf8e1-4a33-48eb-bbb3-9be8d38027de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.981977 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.982036 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.982051 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798bf8e1-4a33-48eb-bbb3-9be8d38027de-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:10 crc kubenswrapper[4740]: I0216 13:13:10.982065 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbh27\" (UniqueName: \"kubernetes.io/projected/798bf8e1-4a33-48eb-bbb3-9be8d38027de-kube-api-access-jbh27\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.406756 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9964" event={"ID":"798bf8e1-4a33-48eb-bbb3-9be8d38027de","Type":"ContainerDied","Data":"7890daaec85b079293eb53769939fb89e5c479ff49cd46a21cd3a2bb42a4d2ce"} Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.406806 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7890daaec85b079293eb53769939fb89e5c479ff49cd46a21cd3a2bb42a4d2ce" Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.406937 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9964" Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.598319 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.599078 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-api" containerID="cri-o://f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7" gracePeriod=30 Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.599014 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-log" containerID="cri-o://b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b" gracePeriod=30 Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.608184 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.608366 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": EOF" Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.619664 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.619920 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" containerName="nova-scheduler-scheduler" 
containerID="cri-o://4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" gracePeriod=30 Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.634148 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.634379 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" containerID="cri-o://02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04" gracePeriod=30 Feb 16 13:13:11 crc kubenswrapper[4740]: I0216 13:13:11.634474 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" containerID="cri-o://46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece" gracePeriod=30 Feb 16 13:13:12 crc kubenswrapper[4740]: I0216 13:13:12.448845 4740 generic.go:334] "Generic (PLEG): container finished" podID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerID="b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b" exitCode=143 Feb 16 13:13:12 crc kubenswrapper[4740]: I0216 13:13:12.449211 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerDied","Data":"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b"} Feb 16 13:13:12 crc kubenswrapper[4740]: I0216 13:13:12.451929 4740 generic.go:334] "Generic (PLEG): container finished" podID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerID="02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04" exitCode=143 Feb 16 13:13:12 crc kubenswrapper[4740]: I0216 13:13:12.451962 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerDied","Data":"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04"} Feb 16 13:13:12 crc kubenswrapper[4740]: E0216 13:13:12.966259 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:13:12 crc kubenswrapper[4740]: E0216 13:13:12.967686 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:13:12 crc kubenswrapper[4740]: E0216 13:13:12.969450 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:13:12 crc kubenswrapper[4740]: E0216 13:13:12.969506 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" containerName="nova-scheduler-scheduler" Feb 16 13:13:14 crc kubenswrapper[4740]: I0216 13:13:14.781225 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.0.198:8775/\": read tcp 10.217.0.2:39742->10.217.0.198:8775: read: connection reset by peer" Feb 16 13:13:14 crc kubenswrapper[4740]: I0216 13:13:14.781290 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:39740->10.217.0.198:8775: read: connection reset by peer" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.248302 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.378533 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfmmj\" (UniqueName: \"kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj\") pod \"a5aba68d-a690-4494-84bd-ccf1ef18592b\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.379053 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs\") pod \"a5aba68d-a690-4494-84bd-ccf1ef18592b\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.379090 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs\") pod \"a5aba68d-a690-4494-84bd-ccf1ef18592b\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.379143 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data\") pod 
\"a5aba68d-a690-4494-84bd-ccf1ef18592b\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.379219 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle\") pod \"a5aba68d-a690-4494-84bd-ccf1ef18592b\" (UID: \"a5aba68d-a690-4494-84bd-ccf1ef18592b\") " Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.381950 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs" (OuterVolumeSpecName: "logs") pod "a5aba68d-a690-4494-84bd-ccf1ef18592b" (UID: "a5aba68d-a690-4494-84bd-ccf1ef18592b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.392084 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj" (OuterVolumeSpecName: "kube-api-access-jfmmj") pod "a5aba68d-a690-4494-84bd-ccf1ef18592b" (UID: "a5aba68d-a690-4494-84bd-ccf1ef18592b"). InnerVolumeSpecName "kube-api-access-jfmmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.409778 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5aba68d-a690-4494-84bd-ccf1ef18592b" (UID: "a5aba68d-a690-4494-84bd-ccf1ef18592b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.414905 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data" (OuterVolumeSpecName: "config-data") pod "a5aba68d-a690-4494-84bd-ccf1ef18592b" (UID: "a5aba68d-a690-4494-84bd-ccf1ef18592b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.450782 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a5aba68d-a690-4494-84bd-ccf1ef18592b" (UID: "a5aba68d-a690-4494-84bd-ccf1ef18592b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.481566 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.481604 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfmmj\" (UniqueName: \"kubernetes.io/projected/a5aba68d-a690-4494-84bd-ccf1ef18592b-kube-api-access-jfmmj\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.481614 4740 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.481624 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5aba68d-a690-4494-84bd-ccf1ef18592b-logs\") on 
node \"crc\" DevicePath \"\"" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.481633 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5aba68d-a690-4494-84bd-ccf1ef18592b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.493134 4740 generic.go:334] "Generic (PLEG): container finished" podID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerID="46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece" exitCode=0 Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.493178 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.493200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerDied","Data":"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece"} Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.493232 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a5aba68d-a690-4494-84bd-ccf1ef18592b","Type":"ContainerDied","Data":"ec2fb41ee7bc0398ae3bc21b2bd16713c705f380c7a143d9071f0092702463d2"} Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.493265 4740 scope.go:117] "RemoveContainer" containerID="46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.526782 4740 scope.go:117] "RemoveContainer" containerID="02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.538652 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.545928 4740 scope.go:117] "RemoveContainer" 
containerID="46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.546271 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece\": container with ID starting with 46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece not found: ID does not exist" containerID="46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.546301 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece"} err="failed to get container status \"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece\": rpc error: code = NotFound desc = could not find container \"46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece\": container with ID starting with 46a652528844c437b8638001d0cb2078b27622ae9399a5ae1e4990302903aece not found: ID does not exist" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.546323 4740 scope.go:117] "RemoveContainer" containerID="02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.546514 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04\": container with ID starting with 02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04 not found: ID does not exist" containerID="02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.550490 4740 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04"} err="failed to get container status \"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04\": rpc error: code = NotFound desc = could not find container \"02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04\": container with ID starting with 02d4aa2ccb595b552bca5175cc8dcee2200f9104132be84303c3b422cf32eb04 not found: ID does not exist" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.581315 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.593728 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.594219 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594236 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.594254 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="dnsmasq-dns" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594261 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="dnsmasq-dns" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.594283 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798bf8e1-4a33-48eb-bbb3-9be8d38027de" containerName="nova-manage" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594289 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="798bf8e1-4a33-48eb-bbb3-9be8d38027de" containerName="nova-manage" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 
13:13:15.594304 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594310 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" Feb 16 13:13:15 crc kubenswrapper[4740]: E0216 13:13:15.594323 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="init" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594329 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="init" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594506 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-log" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594518 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" containerName="nova-metadata-metadata" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594528 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c1f9b3-2d24-4a15-a9ed-1e580d07368d" containerName="dnsmasq-dns" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.594540 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="798bf8e1-4a33-48eb-bbb3-9be8d38027de" containerName="nova-manage" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.597366 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.600300 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.600352 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.602034 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.685507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbdc\" (UniqueName: \"kubernetes.io/projected/722ecd51-0827-457b-8d5c-246a1a57e24a-kube-api-access-lvbdc\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.685622 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722ecd51-0827-457b-8d5c-246a1a57e24a-logs\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.685699 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.685882 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.686116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-config-data\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.788352 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-config-data\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.788430 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvbdc\" (UniqueName: \"kubernetes.io/projected/722ecd51-0827-457b-8d5c-246a1a57e24a-kube-api-access-lvbdc\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.788504 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722ecd51-0827-457b-8d5c-246a1a57e24a-logs\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.789102 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/722ecd51-0827-457b-8d5c-246a1a57e24a-logs\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.789177 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.789521 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.792843 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.795100 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.795489 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/722ecd51-0827-457b-8d5c-246a1a57e24a-config-data\") pod \"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.806578 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvbdc\" (UniqueName: \"kubernetes.io/projected/722ecd51-0827-457b-8d5c-246a1a57e24a-kube-api-access-lvbdc\") pod 
\"nova-metadata-0\" (UID: \"722ecd51-0827-457b-8d5c-246a1a57e24a\") " pod="openstack/nova-metadata-0" Feb 16 13:13:15 crc kubenswrapper[4740]: I0216 13:13:15.914636 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:13:16 crc kubenswrapper[4740]: I0216 13:13:16.397699 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:13:16 crc kubenswrapper[4740]: W0216 13:13:16.404516 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722ecd51_0827_457b_8d5c_246a1a57e24a.slice/crio-2041102166c83b93a81bbf2f5cc733694f04c14aa826f5f3c849687ed5abfdd8 WatchSource:0}: Error finding container 2041102166c83b93a81bbf2f5cc733694f04c14aa826f5f3c849687ed5abfdd8: Status 404 returned error can't find the container with id 2041102166c83b93a81bbf2f5cc733694f04c14aa826f5f3c849687ed5abfdd8 Feb 16 13:13:16 crc kubenswrapper[4740]: I0216 13:13:16.509862 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722ecd51-0827-457b-8d5c-246a1a57e24a","Type":"ContainerStarted","Data":"2041102166c83b93a81bbf2f5cc733694f04c14aa826f5f3c849687ed5abfdd8"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.216946 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.294915 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5aba68d-a690-4494-84bd-ccf1ef18592b" path="/var/lib/kubelet/pods/a5aba68d-a690-4494-84bd-ccf1ef18592b/volumes" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.315585 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle\") pod \"e8eb17c9-d042-4220-bc24-e56054e5be4d\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.315680 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbbdl\" (UniqueName: \"kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl\") pod \"e8eb17c9-d042-4220-bc24-e56054e5be4d\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.315769 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data\") pod \"e8eb17c9-d042-4220-bc24-e56054e5be4d\" (UID: \"e8eb17c9-d042-4220-bc24-e56054e5be4d\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.332058 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl" (OuterVolumeSpecName: "kube-api-access-xbbdl") pod "e8eb17c9-d042-4220-bc24-e56054e5be4d" (UID: "e8eb17c9-d042-4220-bc24-e56054e5be4d"). InnerVolumeSpecName "kube-api-access-xbbdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.349904 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data" (OuterVolumeSpecName: "config-data") pod "e8eb17c9-d042-4220-bc24-e56054e5be4d" (UID: "e8eb17c9-d042-4220-bc24-e56054e5be4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.355415 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8eb17c9-d042-4220-bc24-e56054e5be4d" (UID: "e8eb17c9-d042-4220-bc24-e56054e5be4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.418431 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.418469 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbbdl\" (UniqueName: \"kubernetes.io/projected/e8eb17c9-d042-4220-bc24-e56054e5be4d-kube-api-access-xbbdl\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.418484 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8eb17c9-d042-4220-bc24-e56054e5be4d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.429661 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.519940 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.520308 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.520354 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.520380 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.520413 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmwj2\" (UniqueName: \"kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.520882 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs\") pod \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\" (UID: \"74e5f1b3-dadf-447d-b4c7-6c7274acb380\") " Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.521162 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs" (OuterVolumeSpecName: "logs") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.521500 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74e5f1b3-dadf-447d-b4c7-6c7274acb380-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.521662 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722ecd51-0827-457b-8d5c-246a1a57e24a","Type":"ContainerStarted","Data":"2395b9e17cec64c8014d0a7c6d71db13ce15b31ea3570efd3f817ea07eceb122"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.521690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"722ecd51-0827-457b-8d5c-246a1a57e24a","Type":"ContainerStarted","Data":"bc12862590c5214eec7e789ebe6acd368a66e5e159fc75baddb6b2b853f21d49"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.525983 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2" (OuterVolumeSpecName: "kube-api-access-kmwj2") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "kube-api-access-kmwj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.531662 4740 generic.go:334] "Generic (PLEG): container finished" podID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerID="f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7" exitCode=0 Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.531761 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerDied","Data":"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.531788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"74e5f1b3-dadf-447d-b4c7-6c7274acb380","Type":"ContainerDied","Data":"79ef3154c2df0c308700bb21d2bcd8c8251a70b34c1f3505854cc7ce16a8e1aa"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.531804 4740 scope.go:117] "RemoveContainer" containerID="f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.531933 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.542718 4740 generic.go:334] "Generic (PLEG): container finished" podID="e8eb17c9-d042-4220-bc24-e56054e5be4d" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" exitCode=0 Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.542762 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8eb17c9-d042-4220-bc24-e56054e5be4d","Type":"ContainerDied","Data":"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.542786 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8eb17c9-d042-4220-bc24-e56054e5be4d","Type":"ContainerDied","Data":"77398076beb6a4d90d4fcff474e340ac86b26fe45053c777aa70824a5ad08261"} Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.542850 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.550478 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.551277 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.551252122 podStartE2EDuration="2.551252122s" podCreationTimestamp="2026-02-16 13:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:13:17.547714212 +0000 UTC m=+1224.924062943" watchObservedRunningTime="2026-02-16 13:13:17.551252122 +0000 UTC m=+1224.927600843" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.556905 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data" (OuterVolumeSpecName: "config-data") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.570788 4740 scope.go:117] "RemoveContainer" containerID="b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.589580 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.600964 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.601862 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.609759 4740 scope.go:117] "RemoveContainer" containerID="f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7" Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.610297 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7\": container with ID starting with f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7 not found: ID does not exist" containerID="f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.610333 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7"} err="failed to get container status \"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7\": rpc error: code = NotFound desc = could not find container \"f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7\": container with ID starting with f5a02a57a47d695ec51aea7780c62db8476620150d01b4e88fcf588190852fd7 not found: ID does not exist" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.610358 4740 scope.go:117] "RemoveContainer" containerID="b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b" Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.610869 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b\": container with ID starting with b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b not found: ID does not exist" containerID="b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.610893 
4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b"} err="failed to get container status \"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b\": rpc error: code = NotFound desc = could not find container \"b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b\": container with ID starting with b60a24f2100ffdeb35dc5d7f2bb00b7dc6a6719a839dd5cb4bcb98a0a1cb326b not found: ID does not exist" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.610909 4740 scope.go:117] "RemoveContainer" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.614998 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "74e5f1b3-dadf-447d-b4c7-6c7274acb380" (UID: "74e5f1b3-dadf-447d-b4c7-6c7274acb380"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.617223 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.617743 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-api" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.617767 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-api" Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.617781 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" containerName="nova-scheduler-scheduler" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.617789 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" containerName="nova-scheduler-scheduler" Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.617838 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-log" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.617847 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-log" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.618077 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-api" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.618101 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" containerName="nova-api-log" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.618115 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" 
containerName="nova-scheduler-scheduler" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.618865 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.625009 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.625602 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.626776 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.626793 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.626802 4740 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.626851 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmwj2\" (UniqueName: \"kubernetes.io/projected/74e5f1b3-dadf-447d-b4c7-6c7274acb380-kube-api-access-kmwj2\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.626859 4740 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74e5f1b3-dadf-447d-b4c7-6c7274acb380-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.659659 4740 scope.go:117] "RemoveContainer" 
containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" Feb 16 13:13:17 crc kubenswrapper[4740]: E0216 13:13:17.660649 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff\": container with ID starting with 4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff not found: ID does not exist" containerID="4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.660678 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff"} err="failed to get container status \"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff\": rpc error: code = NotFound desc = could not find container \"4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff\": container with ID starting with 4e4f55a80d7c86afc317258a603b19db7deb4769076dc55ebc02ecc438b10cff not found: ID does not exist" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.729925 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-config-data\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.729998 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxl97\" (UniqueName: \"kubernetes.io/projected/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-kube-api-access-xxl97\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.730139 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.834135 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-config-data\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.834305 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxl97\" (UniqueName: \"kubernetes.io/projected/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-kube-api-access-xxl97\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.834392 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.839352 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-config-data\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.840083 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.856685 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxl97\" (UniqueName: \"kubernetes.io/projected/e3ba9a19-9826-4c43-9907-8cd8f1a4272a-kube-api-access-xxl97\") pod \"nova-scheduler-0\" (UID: \"e3ba9a19-9826-4c43-9907-8cd8f1a4272a\") " pod="openstack/nova-scheduler-0" Feb 16 13:13:17 crc kubenswrapper[4740]: I0216 13:13:17.940403 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.061617 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.076506 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.097863 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.099356 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.105483 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.105579 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.105491 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.140512 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.245671 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvn2l\" (UniqueName: \"kubernetes.io/projected/56ee2c81-2a61-476c-9731-b94363864633-kube-api-access-rvn2l\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.246730 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.246862 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-public-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.247250 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-config-data\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.247392 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ee2c81-2a61-476c-9731-b94363864633-logs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.247524 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.349302 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-config-data\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.349350 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ee2c81-2a61-476c-9731-b94363864633-logs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.349383 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc 
kubenswrapper[4740]: I0216 13:13:18.349416 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvn2l\" (UniqueName: \"kubernetes.io/projected/56ee2c81-2a61-476c-9731-b94363864633-kube-api-access-rvn2l\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.349449 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.349473 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-public-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.350458 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ee2c81-2a61-476c-9731-b94363864633-logs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.354273 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.355081 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-config-data\") pod \"nova-api-0\" (UID: 
\"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.357345 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.357457 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ee2c81-2a61-476c-9731-b94363864633-public-tls-certs\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.371614 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvn2l\" (UniqueName: \"kubernetes.io/projected/56ee2c81-2a61-476c-9731-b94363864633-kube-api-access-rvn2l\") pod \"nova-api-0\" (UID: \"56ee2c81-2a61-476c-9731-b94363864633\") " pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.442151 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.510502 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:13:18 crc kubenswrapper[4740]: W0216 13:13:18.512329 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3ba9a19_9826_4c43_9907_8cd8f1a4272a.slice/crio-d4b1fc8c32363df257b82c63a63874000da884f4d3fb2910ca85d0226789f14b WatchSource:0}: Error finding container d4b1fc8c32363df257b82c63a63874000da884f4d3fb2910ca85d0226789f14b: Status 404 returned error can't find the container with id d4b1fc8c32363df257b82c63a63874000da884f4d3fb2910ca85d0226789f14b Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.559050 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e3ba9a19-9826-4c43-9907-8cd8f1a4272a","Type":"ContainerStarted","Data":"d4b1fc8c32363df257b82c63a63874000da884f4d3fb2910ca85d0226789f14b"} Feb 16 13:13:18 crc kubenswrapper[4740]: I0216 13:13:18.892199 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.296232 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e5f1b3-dadf-447d-b4c7-6c7274acb380" path="/var/lib/kubelet/pods/74e5f1b3-dadf-447d-b4c7-6c7274acb380/volumes" Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.297631 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8eb17c9-d042-4220-bc24-e56054e5be4d" path="/var/lib/kubelet/pods/e8eb17c9-d042-4220-bc24-e56054e5be4d/volumes" Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.572881 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ee2c81-2a61-476c-9731-b94363864633","Type":"ContainerStarted","Data":"64b6631e907eadec0d22847a5acabac6d6e1743026eed159a7ef3d71abf9775e"} Feb 16 
13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.572968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ee2c81-2a61-476c-9731-b94363864633","Type":"ContainerStarted","Data":"b6b85e2869d947a24d8d9dac79182b7517ad5b50ad4514bd0fcc077455e12c61"} Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.572984 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56ee2c81-2a61-476c-9731-b94363864633","Type":"ContainerStarted","Data":"765e1dd53679a6c9a1605ed5432907100f51c8fa3b50d221701b8b937707aee7"} Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.575593 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e3ba9a19-9826-4c43-9907-8cd8f1a4272a","Type":"ContainerStarted","Data":"eb1d3be1e4ce5d680e2e131b4564ff75d7fbb999f8ad50b5291848ac4fafff05"} Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.602888 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.6028709349999999 podStartE2EDuration="1.602870935s" podCreationTimestamp="2026-02-16 13:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:13:19.595309677 +0000 UTC m=+1226.971658398" watchObservedRunningTime="2026-02-16 13:13:19.602870935 +0000 UTC m=+1226.979219646" Feb 16 13:13:19 crc kubenswrapper[4740]: I0216 13:13:19.620086 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.620071685 podStartE2EDuration="2.620071685s" podCreationTimestamp="2026-02-16 13:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:13:19.617250346 +0000 UTC m=+1226.993599067" watchObservedRunningTime="2026-02-16 13:13:19.620071685 +0000 UTC 
m=+1226.996420406"
Feb 16 13:13:20 crc kubenswrapper[4740]: I0216 13:13:20.915723 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 13:13:20 crc kubenswrapper[4740]: I0216 13:13:20.916076 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 13:13:22 crc kubenswrapper[4740]: I0216 13:13:22.941181 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 16 13:13:25 crc kubenswrapper[4740]: I0216 13:13:25.915365 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 16 13:13:25 crc kubenswrapper[4740]: I0216 13:13:25.915879 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 16 13:13:26 crc kubenswrapper[4740]: I0216 13:13:26.928073 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="722ecd51-0827-457b-8d5c-246a1a57e24a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 13:13:26 crc kubenswrapper[4740]: I0216 13:13:26.928479 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="722ecd51-0827-457b-8d5c-246a1a57e24a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 13:13:27 crc kubenswrapper[4740]: I0216 13:13:27.940634 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 16 13:13:27 crc kubenswrapper[4740]: I0216 13:13:27.966012 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 16 13:13:28 crc kubenswrapper[4740]: I0216 13:13:28.443492 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 13:13:28 crc kubenswrapper[4740]: I0216 13:13:28.443532 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 13:13:28 crc kubenswrapper[4740]: I0216 13:13:28.711175 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 16 13:13:29 crc kubenswrapper[4740]: I0216 13:13:29.496077 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="56ee2c81-2a61-476c-9731-b94363864633" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 13:13:29 crc kubenswrapper[4740]: I0216 13:13:29.496209 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="56ee2c81-2a61-476c-9731-b94363864633" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 13:13:29 crc kubenswrapper[4740]: I0216 13:13:29.770684 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 16 13:13:35 crc kubenswrapper[4740]: I0216 13:13:35.921461 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 13:13:35 crc kubenswrapper[4740]: I0216 13:13:35.926135 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 13:13:35 crc kubenswrapper[4740]: I0216 13:13:35.930896 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 13:13:36 crc kubenswrapper[4740]: I0216 13:13:36.746890 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.449553 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.450033 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.450349 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.450384 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.463297 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 13:13:38 crc kubenswrapper[4740]: I0216 13:13:38.463464 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 13:13:45 crc kubenswrapper[4740]: I0216 13:13:45.574910 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 13:13:45 crc kubenswrapper[4740]: I0216 13:13:45.575026 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 13:13:46 crc kubenswrapper[4740]: I0216 13:13:46.535844 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 13:13:47 crc kubenswrapper[4740]: I0216 13:13:47.395203 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 13:13:50 crc kubenswrapper[4740]: I0216 13:13:50.743674 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="rabbitmq" containerID="cri-o://80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088" gracePeriod=604796
Feb 16 13:13:51 crc kubenswrapper[4740]: I0216 13:13:51.939996 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="rabbitmq" containerID="cri-o://3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1" gracePeriod=604796
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.364474 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477324 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477584 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477661 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477705 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477776 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477829 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477882 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477922 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477945 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477971 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtjzt\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.477989 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info\") pod \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\" (UID: \"ba652ec6-7bab-4f13-836b-35b3c7c8325f\") "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.479414 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.480235 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.480416 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.485170 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt" (OuterVolumeSpecName: "kube-api-access-xtjzt") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "kube-api-access-xtjzt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.486939 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info" (OuterVolumeSpecName: "pod-info") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.527324 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.527775 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.528282 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data" (OuterVolumeSpecName: "config-data") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.530078 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.563013 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf" (OuterVolumeSpecName: "server-conf") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583865 4740 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ba652ec6-7bab-4f13-836b-35b3c7c8325f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583893 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583902 4740 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583911 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583922 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtjzt\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-kube-api-access-xtjzt\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583930 4740 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ba652ec6-7bab-4f13-836b-35b3c7c8325f-pod-info\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583952 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583961 4740 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-server-conf\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583969 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.583977 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba652ec6-7bab-4f13-836b-35b3c7c8325f-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.617535 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ba652ec6-7bab-4f13-836b-35b3c7c8325f" (UID: "ba652ec6-7bab-4f13-836b-35b3c7c8325f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.620237 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.686167 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.686214 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ba652ec6-7bab-4f13-836b-35b3c7c8325f-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.935078 4740 generic.go:334] "Generic (PLEG): container finished" podID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerID="80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088" exitCode=0
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.935135 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerDied","Data":"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088"}
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.935175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ba652ec6-7bab-4f13-836b-35b3c7c8325f","Type":"ContainerDied","Data":"934ceceace7365e9c0090e9a012126311d06e3cf25d1f4641361df1885a08c73"}
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.935170 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.935197 4740 scope.go:117] "RemoveContainer" containerID="80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088"
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.971027 4740 scope.go:117] "RemoveContainer" containerID="63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133"
Feb 16 13:13:57 crc kubenswrapper[4740]: I0216 13:13:57.994726 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.006308 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.035551 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 13:13:58 crc kubenswrapper[4740]: E0216 13:13:58.035957 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="rabbitmq"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.035972 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="rabbitmq"
Feb 16 13:13:58 crc kubenswrapper[4740]: E0216 13:13:58.035986 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="setup-container"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.035993 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="setup-container"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.036165 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" containerName="rabbitmq"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.037080 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.042297 4740 scope.go:117] "RemoveContainer" containerID="80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.046405 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 16 13:13:58 crc kubenswrapper[4740]: E0216 13:13:58.046630 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088\": container with ID starting with 80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088 not found: ID does not exist" containerID="80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.046661 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088"} err="failed to get container status \"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088\": rpc error: code = NotFound desc = could not find container \"80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088\": container with ID starting with 80c80ad53deaaa9f55f1e59e5467ac44d73fe60408eb3e774fe1001c25a48088 not found: ID does not exist"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.046684 4740 scope.go:117] "RemoveContainer" containerID="63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.046886 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.046948 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.047000 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-c72m7"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.047074 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.047110 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 16 13:13:58 crc kubenswrapper[4740]: E0216 13:13:58.047183 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133\": container with ID starting with 63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133 not found: ID does not exist" containerID="63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.047205 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133"} err="failed to get container status \"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133\": rpc error: code = NotFound desc = could not find container \"63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133\": container with ID starting with 63470da005fbfa31036daa56f08df2934fc4eae280e23a70d6ea5c8125277133 not found: ID does not exist"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.047530 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.060265 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.196603 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzp44\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-kube-api-access-mzp44\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197033 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ad16000-fb9f-4231-91fe-239907bba675-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197068 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197093 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197141 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ad16000-fb9f-4231-91fe-239907bba675-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197206 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197231 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197267 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-config-data\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197302 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197348 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.197378 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299252 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299325 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzp44\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-kube-api-access-mzp44\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299363 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ad16000-fb9f-4231-91fe-239907bba675-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299387 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299405 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299445 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ad16000-fb9f-4231-91fe-239907bba675-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299500 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299518 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299544 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-config-data\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299603 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.299967 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.300371 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.301547 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.304616 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-config-data\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.304891 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6ad16000-fb9f-4231-91fe-239907bba675-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.304959 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.306313 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6ad16000-fb9f-4231-91fe-239907bba675-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.306701 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.307179 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6ad16000-fb9f-4231-91fe-239907bba675-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.313365 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.319457 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzp44\" (UniqueName: \"kubernetes.io/projected/6ad16000-fb9f-4231-91fe-239907bba675-kube-api-access-mzp44\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.347343 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"6ad16000-fb9f-4231-91fe-239907bba675\") " pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.469970 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.510498 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.604663 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j2ff\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") "
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605373 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") "
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605448 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") "
Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605522 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605573 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605781 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605840 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605900 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605925 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.605999 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.606031 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data\") pod \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\" (UID: \"67441c1a-f0ea-4873-bfe7-d1b25caa58a2\") " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.606378 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.606566 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.607184 4740 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.607211 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.608493 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.610537 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff" (OuterVolumeSpecName: "kube-api-access-8j2ff") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "kube-api-access-8j2ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.610578 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.611210 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info" (OuterVolumeSpecName: "pod-info") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.612052 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.615756 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.647403 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data" (OuterVolumeSpecName: "config-data") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.672988 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf" (OuterVolumeSpecName: "server-conf") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709092 4740 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709122 4740 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709132 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709143 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709155 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j2ff\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-kube-api-access-8j2ff\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709166 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709195 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.709207 4740 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.778303 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "67441c1a-f0ea-4873-bfe7-d1b25caa58a2" (UID: "67441c1a-f0ea-4873-bfe7-d1b25caa58a2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.784384 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.811181 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67441c1a-f0ea-4873-bfe7-d1b25caa58a2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.811593 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.944843 4740 generic.go:334] "Generic (PLEG): container finished" podID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerID="3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1" exitCode=0 Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.944953 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerDied","Data":"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1"} Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.944982 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"67441c1a-f0ea-4873-bfe7-d1b25caa58a2","Type":"ContainerDied","Data":"57a77e39696732ba0c2e89d52e10f74cd6c56edebaba2ddd54807982f361b511"} Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.944978 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.945026 4740 scope.go:117] "RemoveContainer" containerID="3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1" Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.991765 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:13:58 crc kubenswrapper[4740]: I0216 13:13:58.992951 4740 scope.go:117] "RemoveContainer" containerID="ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.006920 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.035313 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:13:59 crc kubenswrapper[4740]: E0216 13:13:59.035726 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="rabbitmq" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.035741 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="rabbitmq" Feb 16 13:13:59 crc kubenswrapper[4740]: E0216 13:13:59.035770 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="setup-container" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.035776 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="setup-container" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.035949 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" containerName="rabbitmq" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.037122 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.040516 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042166 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042313 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042315 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042573 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042870 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-x99bs" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.042954 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.044775 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.069389 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.078278 4740 scope.go:117] "RemoveContainer" containerID="3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1" Feb 16 13:13:59 crc kubenswrapper[4740]: E0216 13:13:59.078841 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1\": container with ID starting with 3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1 not found: ID does not exist" containerID="3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.078874 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1"} err="failed to get container status \"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1\": rpc error: code = NotFound desc = could not find container \"3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1\": container with ID starting with 3260cd43a604e94038228bcf57ad5a3c3540d539bc7cb93256e9d85cb2fd5fd1 not found: ID does not exist" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.078897 4740 scope.go:117] "RemoveContainer" containerID="ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109" Feb 16 13:13:59 crc kubenswrapper[4740]: E0216 13:13:59.079149 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109\": container with ID starting with ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109 not found: ID does not exist" containerID="ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.079171 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109"} err="failed to get container status \"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109\": rpc error: code = NotFound desc = could not find container \"ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109\": container with ID 
starting with ee36b0236a31dd2085afdbd4efd44cee01a019a563fd635f693df75280769109 not found: ID does not exist" Feb 16 13:13:59 crc kubenswrapper[4740]: W0216 13:13:59.101937 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ad16000_fb9f_4231_91fe_239907bba675.slice/crio-a06a8d5419a0b05d5daec34c9e82615d117783713b489d406194f00eafb2cbe9 WatchSource:0}: Error finding container a06a8d5419a0b05d5daec34c9e82615d117783713b489d406194f00eafb2cbe9: Status 404 returned error can't find the container with id a06a8d5419a0b05d5daec34c9e82615d117783713b489d406194f00eafb2cbe9 Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119327 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119414 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119454 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05abd29a-2c3c-4129-9afd-859a65e1ef45-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119482 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119527 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119552 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119567 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05abd29a-2c3c-4129-9afd-859a65e1ef45-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119599 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119617 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.119641 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2gkw\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-kube-api-access-w2gkw\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221283 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221372 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221410 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/05abd29a-2c3c-4129-9afd-859a65e1ef45-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221469 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221506 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221555 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2gkw\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-kube-api-access-w2gkw\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221608 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221721 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221792 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05abd29a-2c3c-4129-9afd-859a65e1ef45-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.221867 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.222053 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.222128 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.222155 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.222351 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.223047 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.224236 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.224260 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05abd29a-2c3c-4129-9afd-859a65e1ef45-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.226153 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.226272 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05abd29a-2c3c-4129-9afd-859a65e1ef45-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.226852 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.228020 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05abd29a-2c3c-4129-9afd-859a65e1ef45-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.240437 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2gkw\" (UniqueName: \"kubernetes.io/projected/05abd29a-2c3c-4129-9afd-859a65e1ef45-kube-api-access-w2gkw\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.272574 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"05abd29a-2c3c-4129-9afd-859a65e1ef45\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.309446 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67441c1a-f0ea-4873-bfe7-d1b25caa58a2" path="/var/lib/kubelet/pods/67441c1a-f0ea-4873-bfe7-d1b25caa58a2/volumes" Feb 
16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.310242 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba652ec6-7bab-4f13-836b-35b3c7c8325f" path="/var/lib/kubelet/pods/ba652ec6-7bab-4f13-836b-35b3c7c8325f/volumes" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.543259 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.657634 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.663299 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.671328 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.677888 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734160 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734212 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734265 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmzzn\" (UniqueName: \"kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734528 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734614 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734733 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.734925 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc 
kubenswrapper[4740]: I0216 13:13:59.836215 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836291 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836585 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836658 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836681 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836699 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.836737 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmzzn\" (UniqueName: \"kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.837571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.837630 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.837760 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.837857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.837904 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.838481 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.853782 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmzzn\" (UniqueName: \"kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn\") pod \"dnsmasq-dns-5576978c7c-92s29\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:13:59 crc kubenswrapper[4740]: I0216 13:13:59.964087 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6ad16000-fb9f-4231-91fe-239907bba675","Type":"ContainerStarted","Data":"a06a8d5419a0b05d5daec34c9e82615d117783713b489d406194f00eafb2cbe9"} Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.028410 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.086018 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:14:00 crc kubenswrapper[4740]: W0216 13:14:00.092694 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05abd29a_2c3c_4129_9afd_859a65e1ef45.slice/crio-1f40e950f8460ef1c97bc5a37c7ced976ff86b13eddd890d6c2b72d4521f6162 WatchSource:0}: Error finding container 1f40e950f8460ef1c97bc5a37c7ced976ff86b13eddd890d6c2b72d4521f6162: Status 404 returned error can't find the container with id 1f40e950f8460ef1c97bc5a37c7ced976ff86b13eddd890d6c2b72d4521f6162 Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.514458 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:14:00 crc kubenswrapper[4740]: W0216 13:14:00.539981 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59421bc1_357f_46f1_857a_57d1562762dc.slice/crio-039a269dce925faacbed19efb42458efe40417fe51c44a7f9115b8283cc86bec WatchSource:0}: Error finding container 039a269dce925faacbed19efb42458efe40417fe51c44a7f9115b8283cc86bec: Status 404 returned error can't find the container with id 039a269dce925faacbed19efb42458efe40417fe51c44a7f9115b8283cc86bec Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.974043 4740 generic.go:334] "Generic (PLEG): container finished" podID="59421bc1-357f-46f1-857a-57d1562762dc" containerID="cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16" exitCode=0 Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.974191 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-92s29" 
event={"ID":"59421bc1-357f-46f1-857a-57d1562762dc","Type":"ContainerDied","Data":"cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16"} Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.974415 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-92s29" event={"ID":"59421bc1-357f-46f1-857a-57d1562762dc","Type":"ContainerStarted","Data":"039a269dce925faacbed19efb42458efe40417fe51c44a7f9115b8283cc86bec"} Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.977594 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"05abd29a-2c3c-4129-9afd-859a65e1ef45","Type":"ContainerStarted","Data":"1f40e950f8460ef1c97bc5a37c7ced976ff86b13eddd890d6c2b72d4521f6162"} Feb 16 13:14:00 crc kubenswrapper[4740]: I0216 13:14:00.979050 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6ad16000-fb9f-4231-91fe-239907bba675","Type":"ContainerStarted","Data":"469d7514fd60c07195ba5fa6d520b6d1b1d0d88a105ce40dcbffad62d47cbeeb"} Feb 16 13:14:01 crc kubenswrapper[4740]: I0216 13:14:01.991648 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-92s29" event={"ID":"59421bc1-357f-46f1-857a-57d1562762dc","Type":"ContainerStarted","Data":"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7"} Feb 16 13:14:01 crc kubenswrapper[4740]: I0216 13:14:01.991958 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:14:01 crc kubenswrapper[4740]: I0216 13:14:01.993671 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"05abd29a-2c3c-4129-9afd-859a65e1ef45","Type":"ContainerStarted","Data":"7ede78a0469b9559885c3cc7044e7e86ac21695b914c311b4c62098977e13b95"} Feb 16 13:14:02 crc kubenswrapper[4740]: I0216 13:14:02.017749 4740 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-5576978c7c-92s29" podStartSLOduration=3.017720347 podStartE2EDuration="3.017720347s" podCreationTimestamp="2026-02-16 13:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:14:02.012137832 +0000 UTC m=+1269.388486553" watchObservedRunningTime="2026-02-16 13:14:02.017720347 +0000 UTC m=+1269.394069108" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.030904 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.101503 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.101868 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="dnsmasq-dns" containerID="cri-o://d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32" gracePeriod=10 Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.238173 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-5sfmf"] Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.244451 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.268314 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-5sfmf"] Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.280979 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281038 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6lb4\" (UniqueName: \"kubernetes.io/projected/dc46d93a-139d-4125-9763-1093f49419a5-kube-api-access-f6lb4\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281089 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281125 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281153 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281407 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.281440 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-config\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385153 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385236 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-config\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385307 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385335 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6lb4\" (UniqueName: \"kubernetes.io/projected/dc46d93a-139d-4125-9763-1093f49419a5-kube-api-access-f6lb4\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385383 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385411 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.385432 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.386252 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-config\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.386292 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.386391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.387395 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.387534 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.387758 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc46d93a-139d-4125-9763-1093f49419a5-dns-svc\") pod 
\"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.403670 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6lb4\" (UniqueName: \"kubernetes.io/projected/dc46d93a-139d-4125-9763-1093f49419a5-kube-api-access-f6lb4\") pod \"dnsmasq-dns-8c6f6df99-5sfmf\" (UID: \"dc46d93a-139d-4125-9763-1093f49419a5\") " pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.597393 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.693463 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.795350 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.795873 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.795951 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 
13:14:10.796013 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.796055 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x6mf\" (UniqueName: \"kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.796108 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0\") pod \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\" (UID: \"2d75f780-0301-46d4-aa0b-ecdf66b8bc21\") " Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.801313 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf" (OuterVolumeSpecName: "kube-api-access-4x6mf") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "kube-api-access-4x6mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.844996 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.850968 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.854535 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.859192 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config" (OuterVolumeSpecName: "config") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.860697 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d75f780-0301-46d4-aa0b-ecdf66b8bc21" (UID: "2d75f780-0301-46d4-aa0b-ecdf66b8bc21"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899015 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899056 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899069 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899083 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x6mf\" (UniqueName: \"kubernetes.io/projected/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-kube-api-access-4x6mf\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899095 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:10 crc kubenswrapper[4740]: I0216 13:14:10.899105 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d75f780-0301-46d4-aa0b-ecdf66b8bc21-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.068080 4740 generic.go:334] "Generic (PLEG): container finished" podID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerID="d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32" exitCode=0 Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.068132 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" event={"ID":"2d75f780-0301-46d4-aa0b-ecdf66b8bc21","Type":"ContainerDied","Data":"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32"} Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.068190 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" event={"ID":"2d75f780-0301-46d4-aa0b-ecdf66b8bc21","Type":"ContainerDied","Data":"36059a67ae20b43daa15e0427481604179329ee78b6747e45aa5695fe8ffaa0e"} Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.068212 4740 scope.go:117] "RemoveContainer" containerID="d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.068220 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-kdzv4" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.091706 4740 scope.go:117] "RemoveContainer" containerID="93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.093190 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-5sfmf"] Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.127358 4740 scope.go:117] "RemoveContainer" containerID="d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32" Feb 16 13:14:11 crc kubenswrapper[4740]: E0216 13:14:11.127688 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32\": container with ID starting with d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32 not found: ID does not exist" containerID="d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.127803 4740 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32"} err="failed to get container status \"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32\": rpc error: code = NotFound desc = could not find container \"d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32\": container with ID starting with d17b4a884b24a12f71a122f2247f37ab8917842c9b00bb490c03541a5980be32 not found: ID does not exist" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.127906 4740 scope.go:117] "RemoveContainer" containerID="93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831" Feb 16 13:14:11 crc kubenswrapper[4740]: E0216 13:14:11.128185 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831\": container with ID starting with 93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831 not found: ID does not exist" containerID="93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.128278 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831"} err="failed to get container status \"93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831\": rpc error: code = NotFound desc = could not find container \"93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831\": container with ID starting with 93280c02ea6e1af0c830594424c63d9a3b2ad91e3a7b95c9b2de6220c68aa831 not found: ID does not exist" Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.133446 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.141943 4740 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-kdzv4"] Feb 16 13:14:11 crc kubenswrapper[4740]: I0216 13:14:11.303795 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" path="/var/lib/kubelet/pods/2d75f780-0301-46d4-aa0b-ecdf66b8bc21/volumes" Feb 16 13:14:12 crc kubenswrapper[4740]: I0216 13:14:12.083154 4740 generic.go:334] "Generic (PLEG): container finished" podID="dc46d93a-139d-4125-9763-1093f49419a5" containerID="e726b282a1b9e748463f67cf69afcf36e85cd50adafa2bc4fa3d417259cd436c" exitCode=0 Feb 16 13:14:12 crc kubenswrapper[4740]: I0216 13:14:12.083216 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" event={"ID":"dc46d93a-139d-4125-9763-1093f49419a5","Type":"ContainerDied","Data":"e726b282a1b9e748463f67cf69afcf36e85cd50adafa2bc4fa3d417259cd436c"} Feb 16 13:14:12 crc kubenswrapper[4740]: I0216 13:14:12.083266 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" event={"ID":"dc46d93a-139d-4125-9763-1093f49419a5","Type":"ContainerStarted","Data":"4338e12bb91d2ca9c1548e22b396f5f7b2f1f150520816f0c1790eae55547b95"} Feb 16 13:14:13 crc kubenswrapper[4740]: I0216 13:14:13.096512 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" event={"ID":"dc46d93a-139d-4125-9763-1093f49419a5","Type":"ContainerStarted","Data":"57120e1caa117d86690ef3fa842a613ba387873abdadc27842e4798377615c7a"} Feb 16 13:14:13 crc kubenswrapper[4740]: I0216 13:14:13.097046 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:13 crc kubenswrapper[4740]: I0216 13:14:13.124956 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" podStartSLOduration=3.124941057 podStartE2EDuration="3.124941057s" podCreationTimestamp="2026-02-16 13:14:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:14:13.122102448 +0000 UTC m=+1280.498451169" watchObservedRunningTime="2026-02-16 13:14:13.124941057 +0000 UTC m=+1280.501289778" Feb 16 13:14:15 crc kubenswrapper[4740]: I0216 13:14:15.574733 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:14:15 crc kubenswrapper[4740]: I0216 13:14:15.575148 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:14:20 crc kubenswrapper[4740]: I0216 13:14:20.599041 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8c6f6df99-5sfmf" Feb 16 13:14:20 crc kubenswrapper[4740]: I0216 13:14:20.660422 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:14:20 crc kubenswrapper[4740]: I0216 13:14:20.660752 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5576978c7c-92s29" podUID="59421bc1-357f-46f1-857a-57d1562762dc" containerName="dnsmasq-dns" containerID="cri-o://741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7" gracePeriod=10 Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.158699 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.176777 4740 generic.go:334] "Generic (PLEG): container finished" podID="59421bc1-357f-46f1-857a-57d1562762dc" containerID="741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7" exitCode=0 Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.176834 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-92s29" event={"ID":"59421bc1-357f-46f1-857a-57d1562762dc","Type":"ContainerDied","Data":"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7"} Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.176866 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-92s29" event={"ID":"59421bc1-357f-46f1-857a-57d1562762dc","Type":"ContainerDied","Data":"039a269dce925faacbed19efb42458efe40417fe51c44a7f9115b8283cc86bec"} Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.176870 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-92s29" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.176885 4740 scope.go:117] "RemoveContainer" containerID="741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.200138 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.200306 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.201233 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.201285 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmzzn\" (UniqueName: \"kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.201309 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: 
\"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.201420 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.201551 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb\") pod \"59421bc1-357f-46f1-857a-57d1562762dc\" (UID: \"59421bc1-357f-46f1-857a-57d1562762dc\") " Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.210336 4740 scope.go:117] "RemoveContainer" containerID="cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.237075 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn" (OuterVolumeSpecName: "kube-api-access-rmzzn") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "kube-api-access-rmzzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.259868 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.260218 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.270262 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.271355 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.294158 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config" (OuterVolumeSpecName: "config") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.298224 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "59421bc1-357f-46f1-857a-57d1562762dc" (UID: "59421bc1-357f-46f1-857a-57d1562762dc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304277 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304318 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304332 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304342 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304353 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmzzn\" (UniqueName: \"kubernetes.io/projected/59421bc1-357f-46f1-857a-57d1562762dc-kube-api-access-rmzzn\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304364 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.304373 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59421bc1-357f-46f1-857a-57d1562762dc-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.363249 4740 scope.go:117] "RemoveContainer" containerID="741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7" Feb 16 13:14:21 crc kubenswrapper[4740]: E0216 13:14:21.363725 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7\": container with ID starting with 741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7 not found: ID does not exist" containerID="741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.363766 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7"} err="failed to get container status \"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7\": rpc error: code = NotFound desc = could not find container \"741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7\": container with ID starting with 741f203908511f0dcf1a35e9fd84ae9e8b49df4b512b0c86326237e02928a1c7 not found: ID does not exist" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.363792 4740 scope.go:117] "RemoveContainer" containerID="cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16" Feb 16 13:14:21 crc kubenswrapper[4740]: E0216 13:14:21.364391 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16\": container with ID starting with cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16 not found: ID does not exist" containerID="cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.364416 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16"} err="failed to get container status \"cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16\": rpc error: code = NotFound desc = could not find container \"cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16\": container with ID starting with cb7c3764054784347041c4f99842bf779824a595cd5d861e9ee97b5f71216f16 not found: ID does not exist" Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.510205 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:14:21 crc kubenswrapper[4740]: I0216 13:14:21.518405 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-92s29"] Feb 16 13:14:23 crc kubenswrapper[4740]: I0216 13:14:23.293853 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59421bc1-357f-46f1-857a-57d1562762dc" path="/var/lib/kubelet/pods/59421bc1-357f-46f1-857a-57d1562762dc/volumes" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.340242 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m"] Feb 16 13:14:29 crc kubenswrapper[4740]: E0216 13:14:29.341090 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59421bc1-357f-46f1-857a-57d1562762dc" containerName="init" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341101 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="59421bc1-357f-46f1-857a-57d1562762dc" 
containerName="init" Feb 16 13:14:29 crc kubenswrapper[4740]: E0216 13:14:29.341117 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59421bc1-357f-46f1-857a-57d1562762dc" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341123 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="59421bc1-357f-46f1-857a-57d1562762dc" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: E0216 13:14:29.341144 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="init" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341151 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="init" Feb 16 13:14:29 crc kubenswrapper[4740]: E0216 13:14:29.341179 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341185 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341340 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d75f780-0301-46d4-aa0b-ecdf66b8bc21" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.341354 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="59421bc1-357f-46f1-857a-57d1562762dc" containerName="dnsmasq-dns" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.342016 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.347422 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.347564 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.349115 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.351354 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.355677 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m"] Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.361451 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.361614 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8s9w\" (UniqueName: \"kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.361749 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.361778 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.464082 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8s9w\" (UniqueName: \"kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.464245 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.464286 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.464343 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.470486 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.470570 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.471369 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.495736 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8s9w\" (UniqueName: \"kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:29 crc kubenswrapper[4740]: I0216 13:14:29.671804 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:30 crc kubenswrapper[4740]: I0216 13:14:30.196879 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m"] Feb 16 13:14:30 crc kubenswrapper[4740]: W0216 13:14:30.197105 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e403d2d_bd7d_4fa6_a2a4_e15f63d2b090.slice/crio-e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a WatchSource:0}: Error finding container e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a: Status 404 returned error can't find the container with id e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a Feb 16 13:14:30 crc kubenswrapper[4740]: I0216 13:14:30.200585 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:14:30 crc kubenswrapper[4740]: I0216 13:14:30.283935 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" event={"ID":"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090","Type":"ContainerStarted","Data":"e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a"} Feb 16 13:14:33 crc kubenswrapper[4740]: I0216 
13:14:33.313071 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ad16000-fb9f-4231-91fe-239907bba675" containerID="469d7514fd60c07195ba5fa6d520b6d1b1d0d88a105ce40dcbffad62d47cbeeb" exitCode=0 Feb 16 13:14:33 crc kubenswrapper[4740]: I0216 13:14:33.313457 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6ad16000-fb9f-4231-91fe-239907bba675","Type":"ContainerDied","Data":"469d7514fd60c07195ba5fa6d520b6d1b1d0d88a105ce40dcbffad62d47cbeeb"} Feb 16 13:14:34 crc kubenswrapper[4740]: I0216 13:14:34.323673 4740 generic.go:334] "Generic (PLEG): container finished" podID="05abd29a-2c3c-4129-9afd-859a65e1ef45" containerID="7ede78a0469b9559885c3cc7044e7e86ac21695b914c311b4c62098977e13b95" exitCode=0 Feb 16 13:14:34 crc kubenswrapper[4740]: I0216 13:14:34.323773 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"05abd29a-2c3c-4129-9afd-859a65e1ef45","Type":"ContainerDied","Data":"7ede78a0469b9559885c3cc7044e7e86ac21695b914c311b4c62098977e13b95"} Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.367214 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6ad16000-fb9f-4231-91fe-239907bba675","Type":"ContainerStarted","Data":"29ed1f86f1f0b70eb87583ecf27b796826750aeb2029ca351d0659f7b05a4282"} Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.368154 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.369136 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" event={"ID":"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090","Type":"ContainerStarted","Data":"1b370219bb2b59bfe8b61d51b3c7656a5cc6a9ac146dec61d19e46870e6cfe05"} Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.372746 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"05abd29a-2c3c-4129-9afd-859a65e1ef45","Type":"ContainerStarted","Data":"091a01464e132b090da2e3d7e1032571cac74ddefc7f9027a8c4417b4268942c"} Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.373160 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.423901 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.423880028 podStartE2EDuration="42.423880028s" podCreationTimestamp="2026-02-16 13:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:14:39.401317599 +0000 UTC m=+1306.777666330" watchObservedRunningTime="2026-02-16 13:14:39.423880028 +0000 UTC m=+1306.800228769" Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.436333 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" podStartSLOduration=2.421657157 podStartE2EDuration="10.436312648s" podCreationTimestamp="2026-02-16 13:14:29 +0000 UTC" firstStartedPulling="2026-02-16 13:14:30.200347745 +0000 UTC m=+1297.576696466" lastFinishedPulling="2026-02-16 13:14:38.215003226 +0000 UTC m=+1305.591351957" observedRunningTime="2026-02-16 13:14:39.419295873 +0000 UTC m=+1306.795644604" watchObservedRunningTime="2026-02-16 13:14:39.436312648 +0000 UTC m=+1306.812661379" Feb 16 13:14:39 crc kubenswrapper[4740]: I0216 13:14:39.446987 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.446968282 podStartE2EDuration="41.446968282s" podCreationTimestamp="2026-02-16 13:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 13:14:39.445737714 +0000 UTC m=+1306.822086445" watchObservedRunningTime="2026-02-16 13:14:39.446968282 +0000 UTC m=+1306.823317013" Feb 16 13:14:45 crc kubenswrapper[4740]: I0216 13:14:45.575649 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:14:45 crc kubenswrapper[4740]: I0216 13:14:45.576304 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:14:45 crc kubenswrapper[4740]: I0216 13:14:45.576362 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:14:45 crc kubenswrapper[4740]: I0216 13:14:45.577194 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:14:45 crc kubenswrapper[4740]: I0216 13:14:45.577262 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1" gracePeriod=600 Feb 16 13:14:46 crc kubenswrapper[4740]: I0216 
13:14:46.441095 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1" exitCode=0 Feb 16 13:14:46 crc kubenswrapper[4740]: I0216 13:14:46.441294 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1"} Feb 16 13:14:46 crc kubenswrapper[4740]: I0216 13:14:46.441738 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"} Feb 16 13:14:46 crc kubenswrapper[4740]: I0216 13:14:46.441755 4740 scope.go:117] "RemoveContainer" containerID="3887ea1a7fbb3fb6bf0033560112227b337a28b6336d1a7733acdb37db4dff8f" Feb 16 13:14:48 crc kubenswrapper[4740]: I0216 13:14:48.516025 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 13:14:49 crc kubenswrapper[4740]: I0216 13:14:49.476261 4740 generic.go:334] "Generic (PLEG): container finished" podID="1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" containerID="1b370219bb2b59bfe8b61d51b3c7656a5cc6a9ac146dec61d19e46870e6cfe05" exitCode=0 Feb 16 13:14:49 crc kubenswrapper[4740]: I0216 13:14:49.476420 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" event={"ID":"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090","Type":"ContainerDied","Data":"1b370219bb2b59bfe8b61d51b3c7656a5cc6a9ac146dec61d19e46870e6cfe05"} Feb 16 13:14:49 crc kubenswrapper[4740]: I0216 13:14:49.547088 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:14:50 crc kubenswrapper[4740]: I0216 13:14:50.913447 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:50 crc kubenswrapper[4740]: I0216 13:14:50.998562 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8s9w\" (UniqueName: \"kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w\") pod \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " Feb 16 13:14:50 crc kubenswrapper[4740]: I0216 13:14:50.998935 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam\") pod \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " Feb 16 13:14:50 crc kubenswrapper[4740]: I0216 13:14:50.999112 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle\") pod \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " Feb 16 13:14:50 crc kubenswrapper[4740]: I0216 13:14:50.999144 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory\") pod \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\" (UID: \"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090\") " Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.005267 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: 
"repo-setup-combined-ca-bundle") pod "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" (UID: "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.005493 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w" (OuterVolumeSpecName: "kube-api-access-g8s9w") pod "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" (UID: "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090"). InnerVolumeSpecName "kube-api-access-g8s9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.028990 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory" (OuterVolumeSpecName: "inventory") pod "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" (UID: "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.030992 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" (UID: "1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.100909 4740 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.100960 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.100977 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8s9w\" (UniqueName: \"kubernetes.io/projected/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-kube-api-access-g8s9w\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.100990 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.496198 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" event={"ID":"1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090","Type":"ContainerDied","Data":"e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a"} Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.496251 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5400b5f837f9ff75cf03e0fed9bca55825759bc40df50cf99067c551616e47a" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.496317 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.600695 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988"] Feb 16 13:14:51 crc kubenswrapper[4740]: E0216 13:14:51.601136 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.601156 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.601394 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.602097 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.604466 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.604695 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.604721 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.605082 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.613477 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988"] Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.712390 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86nwc\" (UniqueName: \"kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.712769 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.712929 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.814263 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86nwc\" (UniqueName: \"kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.814619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.814744 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.818534 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: 
\"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.818916 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.832505 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86nwc\" (UniqueName: \"kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4c988\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:51 crc kubenswrapper[4740]: I0216 13:14:51.951305 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:52 crc kubenswrapper[4740]: I0216 13:14:52.548356 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988"] Feb 16 13:14:53 crc kubenswrapper[4740]: I0216 13:14:53.514891 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" event={"ID":"2abfe09c-2736-49b3-b4e5-fb0e30deb510","Type":"ContainerStarted","Data":"8b189f58d2d58c6ee0124c58c80471da26acb462e4179a056e84d0ee5ab1e143"} Feb 16 13:14:53 crc kubenswrapper[4740]: I0216 13:14:53.516202 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" event={"ID":"2abfe09c-2736-49b3-b4e5-fb0e30deb510","Type":"ContainerStarted","Data":"498ea411b013d22580800000c27282d8c77a5f1c0ef053cee727550c62fa60f4"} Feb 16 13:14:53 crc kubenswrapper[4740]: I0216 13:14:53.542347 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" podStartSLOduration=2.041711662 podStartE2EDuration="2.542325352s" podCreationTimestamp="2026-02-16 13:14:51 +0000 UTC" firstStartedPulling="2026-02-16 13:14:52.547177932 +0000 UTC m=+1319.923526653" lastFinishedPulling="2026-02-16 13:14:53.047791622 +0000 UTC m=+1320.424140343" observedRunningTime="2026-02-16 13:14:53.53430127 +0000 UTC m=+1320.910650001" watchObservedRunningTime="2026-02-16 13:14:53.542325352 +0000 UTC m=+1320.918674073" Feb 16 13:14:56 crc kubenswrapper[4740]: I0216 13:14:56.547957 4740 generic.go:334] "Generic (PLEG): container finished" podID="2abfe09c-2736-49b3-b4e5-fb0e30deb510" containerID="8b189f58d2d58c6ee0124c58c80471da26acb462e4179a056e84d0ee5ab1e143" exitCode=0 Feb 16 13:14:56 crc kubenswrapper[4740]: I0216 13:14:56.548014 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" event={"ID":"2abfe09c-2736-49b3-b4e5-fb0e30deb510","Type":"ContainerDied","Data":"8b189f58d2d58c6ee0124c58c80471da26acb462e4179a056e84d0ee5ab1e143"} Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.009261 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.058002 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86nwc\" (UniqueName: \"kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc\") pod \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.058049 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam\") pod \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.058083 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory\") pod \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\" (UID: \"2abfe09c-2736-49b3-b4e5-fb0e30deb510\") " Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.063652 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc" (OuterVolumeSpecName: "kube-api-access-86nwc") pod "2abfe09c-2736-49b3-b4e5-fb0e30deb510" (UID: "2abfe09c-2736-49b3-b4e5-fb0e30deb510"). InnerVolumeSpecName "kube-api-access-86nwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.087984 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2abfe09c-2736-49b3-b4e5-fb0e30deb510" (UID: "2abfe09c-2736-49b3-b4e5-fb0e30deb510"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.088429 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory" (OuterVolumeSpecName: "inventory") pod "2abfe09c-2736-49b3-b4e5-fb0e30deb510" (UID: "2abfe09c-2736-49b3-b4e5-fb0e30deb510"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.160469 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86nwc\" (UniqueName: \"kubernetes.io/projected/2abfe09c-2736-49b3-b4e5-fb0e30deb510-kube-api-access-86nwc\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.160514 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.160527 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2abfe09c-2736-49b3-b4e5-fb0e30deb510-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.571131 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" 
event={"ID":"2abfe09c-2736-49b3-b4e5-fb0e30deb510","Type":"ContainerDied","Data":"498ea411b013d22580800000c27282d8c77a5f1c0ef053cee727550c62fa60f4"} Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.571179 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="498ea411b013d22580800000c27282d8c77a5f1c0ef053cee727550c62fa60f4" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.571183 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4c988" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.633252 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8"] Feb 16 13:14:58 crc kubenswrapper[4740]: E0216 13:14:58.633661 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2abfe09c-2736-49b3-b4e5-fb0e30deb510" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.633677 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2abfe09c-2736-49b3-b4e5-fb0e30deb510" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.633864 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2abfe09c-2736-49b3-b4e5-fb0e30deb510" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.634587 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.637016 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.637350 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.637536 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.639138 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.661948 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8"] Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.671047 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.671165 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: 
I0216 13:14:58.671221 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.671291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7m79\" (UniqueName: \"kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.772769 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.772915 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.772988 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.773084 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7m79\" (UniqueName: \"kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.778295 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.778842 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.779489 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.791173 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7m79\" (UniqueName: \"kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:58 crc kubenswrapper[4740]: I0216 13:14:58.957108 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" Feb 16 13:14:59 crc kubenswrapper[4740]: I0216 13:14:59.473016 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8"] Feb 16 13:14:59 crc kubenswrapper[4740]: W0216 13:14:59.473648 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e96214f_a46e_451a_97d9_d448c66826f4.slice/crio-8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60 WatchSource:0}: Error finding container 8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60: Status 404 returned error can't find the container with id 8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60 Feb 16 13:14:59 crc kubenswrapper[4740]: I0216 13:14:59.580394 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" event={"ID":"8e96214f-a46e-451a-97d9-d448c66826f4","Type":"ContainerStarted","Data":"8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60"} Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.148970 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"] Feb 16 13:15:00 crc 
kubenswrapper[4740]: I0216 13:15:00.201691 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"]
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.202369 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.204921 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.204938 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.307216 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.308046 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.308178 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxbhj\" (UniqueName: \"kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.410398 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.410542 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbhj\" (UniqueName: \"kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.410650 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.411869 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.416958 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.432032 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbhj\" (UniqueName: \"kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj\") pod \"collect-profiles-29520795-jf4h8\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.592035 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" event={"ID":"8e96214f-a46e-451a-97d9-d448c66826f4","Type":"ContainerStarted","Data":"7e2f7272c67b8c08fe7c64c98a0e4e52cdba5944d66dcc2cf2fb8eaf76c9dc54"}
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.620497 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" podStartSLOduration=1.8843872689999999 podStartE2EDuration="2.620476543s" podCreationTimestamp="2026-02-16 13:14:58 +0000 UTC" firstStartedPulling="2026-02-16 13:14:59.476913643 +0000 UTC m=+1326.853262454" lastFinishedPulling="2026-02-16 13:15:00.213003007 +0000 UTC m=+1327.589351728" observedRunningTime="2026-02-16 13:15:00.611843732 +0000 UTC m=+1327.988192453" watchObservedRunningTime="2026-02-16 13:15:00.620476543 +0000 UTC m=+1327.996825284"
Feb 16 13:15:00 crc kubenswrapper[4740]: I0216 13:15:00.675317 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:01 crc kubenswrapper[4740]: W0216 13:15:01.120492 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab47f99f_f805_4d2e_bdf6_6da944e511a5.slice/crio-6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f WatchSource:0}: Error finding container 6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f: Status 404 returned error can't find the container with id 6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f
Feb 16 13:15:01 crc kubenswrapper[4740]: I0216 13:15:01.140764 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"]
Feb 16 13:15:01 crc kubenswrapper[4740]: I0216 13:15:01.608222 4740 generic.go:334] "Generic (PLEG): container finished" podID="ab47f99f-f805-4d2e-bdf6-6da944e511a5" containerID="abe9c24d5f732811d552e04df67f2330c658e4db7a4f4498f3fb4c1af1df86df" exitCode=0
Feb 16 13:15:01 crc kubenswrapper[4740]: I0216 13:15:01.608388 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" event={"ID":"ab47f99f-f805-4d2e-bdf6-6da944e511a5","Type":"ContainerDied","Data":"abe9c24d5f732811d552e04df67f2330c658e4db7a4f4498f3fb4c1af1df86df"}
Feb 16 13:15:01 crc kubenswrapper[4740]: I0216 13:15:01.608659 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" event={"ID":"ab47f99f-f805-4d2e-bdf6-6da944e511a5","Type":"ContainerStarted","Data":"6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f"}
Feb 16 13:15:02 crc kubenswrapper[4740]: I0216 13:15:02.885137 4740 scope.go:117] "RemoveContainer" containerID="393ab583d053b27f8beb9f7c43ec09687ddca9d3ed124563beb0f63d010c9ebb"
Feb 16 13:15:02 crc kubenswrapper[4740]: I0216 13:15:02.918662 4740 scope.go:117] "RemoveContainer" containerID="538fc5b7f98d6ae84456c8d0c054c6e4ef97df100afc94d9176239a93296b9b5"
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.064942 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.168696 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume\") pod \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") "
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.168793 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume\") pod \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") "
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.168903 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxbhj\" (UniqueName: \"kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj\") pod \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\" (UID: \"ab47f99f-f805-4d2e-bdf6-6da944e511a5\") "
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.169258 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab47f99f-f805-4d2e-bdf6-6da944e511a5" (UID: "ab47f99f-f805-4d2e-bdf6-6da944e511a5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.169327 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab47f99f-f805-4d2e-bdf6-6da944e511a5-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.175040 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj" (OuterVolumeSpecName: "kube-api-access-rxbhj") pod "ab47f99f-f805-4d2e-bdf6-6da944e511a5" (UID: "ab47f99f-f805-4d2e-bdf6-6da944e511a5"). InnerVolumeSpecName "kube-api-access-rxbhj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.175743 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab47f99f-f805-4d2e-bdf6-6da944e511a5" (UID: "ab47f99f-f805-4d2e-bdf6-6da944e511a5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.271596 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxbhj\" (UniqueName: \"kubernetes.io/projected/ab47f99f-f805-4d2e-bdf6-6da944e511a5-kube-api-access-rxbhj\") on node \"crc\" DevicePath \"\""
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.271944 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab47f99f-f805-4d2e-bdf6-6da944e511a5-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.637262 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8" event={"ID":"ab47f99f-f805-4d2e-bdf6-6da944e511a5","Type":"ContainerDied","Data":"6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f"}
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.637305 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b4d0687b9987c3a7c64b652d1e8c1e26b07165519c44e31dd0795b908ade45f"
Feb 16 13:15:03 crc kubenswrapper[4740]: I0216 13:15:03.637361 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"
Feb 16 13:16:03 crc kubenswrapper[4740]: I0216 13:16:03.130655 4740 scope.go:117] "RemoveContainer" containerID="4b198406a524d7dff3e729a1eee0d73938c8ae12df658ec8480ab9355f0779b0"
Feb 16 13:16:03 crc kubenswrapper[4740]: I0216 13:16:03.156077 4740 scope.go:117] "RemoveContainer" containerID="b6158fa1fc9cea7906b55a39bb9178812e4aaa28f5f59307e4b0bc0142b18d51"
Feb 16 13:16:03 crc kubenswrapper[4740]: I0216 13:16:03.237735 4740 scope.go:117] "RemoveContainer" containerID="8f70084c6e792770a51a2232e265409290d0a24738129fda218357cafbb39d87"
Feb 16 13:17:03 crc kubenswrapper[4740]: I0216 13:17:03.327455 4740 scope.go:117] "RemoveContainer" containerID="3eccc44255e03c3377abc687e9c41e721ea09718010749b621343a7f92179705"
Feb 16 13:17:03 crc kubenswrapper[4740]: I0216 13:17:03.352557 4740 scope.go:117] "RemoveContainer" containerID="8ce27faa665c6476b0cc53766481f3f6617cbb88c529599da7ba09a849b8b74d"
Feb 16 13:17:03 crc kubenswrapper[4740]: I0216 13:17:03.371274 4740 scope.go:117] "RemoveContainer" containerID="d6f9feb8edce8f2ec6ae24391658e5a12b683ef4cdb4b51bd4bc709071ae093a"
Feb 16 13:17:03 crc kubenswrapper[4740]: I0216 13:17:03.392973 4740 scope.go:117] "RemoveContainer" containerID="915edf68cd3c7270b236b58655ea096c142d3604b5d0dd9bafbc0091d2a43aae"
Feb 16 13:17:03 crc kubenswrapper[4740]: I0216 13:17:03.412275 4740 scope.go:117] "RemoveContainer" containerID="b47ff1a3cc1cf6e6ce635669516cb1eeff7f42e4967363ec4445652f9d813b11"
Feb 16 13:17:15 crc kubenswrapper[4740]: I0216 13:17:15.575190 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 13:17:15 crc kubenswrapper[4740]: I0216 13:17:15.575905 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 13:17:45 crc kubenswrapper[4740]: I0216 13:17:45.574696 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 13:17:45 crc kubenswrapper[4740]: I0216 13:17:45.575279 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.354474 4740 generic.go:334] "Generic (PLEG): container finished" podID="8e96214f-a46e-451a-97d9-d448c66826f4" containerID="7e2f7272c67b8c08fe7c64c98a0e4e52cdba5944d66dcc2cf2fb8eaf76c9dc54" exitCode=0
Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.354581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" event={"ID":"8e96214f-a46e-451a-97d9-d448c66826f4","Type":"ContainerDied","Data":"7e2f7272c67b8c08fe7c64c98a0e4e52cdba5944d66dcc2cf2fb8eaf76c9dc54"}
Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.462402 4740 scope.go:117] "RemoveContainer" containerID="14b7b362722a6742396e9c6ada9fc6b8542386c6aa6a2d0db427fdcad3db1b40"
Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.482632 4740 scope.go:117] "RemoveContainer" containerID="f251def4f6e79c938ed791e78c8d49d0d744dfe6ab6538388a7c75361bdf2939"
Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.503759 4740 scope.go:117] "RemoveContainer" containerID="10c1145d53cd2b89a58d20979cc3e78503d07b38832842c393d26e60199595dd"
Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.525544 4740 scope.go:117] "RemoveContainer" containerID="a1fe664ff4dba633e5c2ceeabea6e248cc29fb01509a09c4d810877a6db7482b"
Feb 16 13:18:03 crc kubenswrapper[4740]: I0216 13:18:03.541720 4740 scope.go:117] "RemoveContainer" containerID="3c08f3c4d132ba264a22af327ebe21b3c3ed4324088fef015b40790eb4878e70"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.318640 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.379345 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8" event={"ID":"8e96214f-a46e-451a-97d9-d448c66826f4","Type":"ContainerDied","Data":"8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60"}
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.379695 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b9ee2d3258c87bd80e9e740c38e3e45b18ec4e99dedf60177b7df1ac0d43d60"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.379729 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.386317 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7m79\" (UniqueName: \"kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79\") pod \"8e96214f-a46e-451a-97d9-d448c66826f4\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") "
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.386376 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle\") pod \"8e96214f-a46e-451a-97d9-d448c66826f4\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") "
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.386430 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory\") pod \"8e96214f-a46e-451a-97d9-d448c66826f4\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") "
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.386456 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam\") pod \"8e96214f-a46e-451a-97d9-d448c66826f4\" (UID: \"8e96214f-a46e-451a-97d9-d448c66826f4\") "
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.396516 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8e96214f-a46e-451a-97d9-d448c66826f4" (UID: "8e96214f-a46e-451a-97d9-d448c66826f4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.396519 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79" (OuterVolumeSpecName: "kube-api-access-m7m79") pod "8e96214f-a46e-451a-97d9-d448c66826f4" (UID: "8e96214f-a46e-451a-97d9-d448c66826f4"). InnerVolumeSpecName "kube-api-access-m7m79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.424140 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory" (OuterVolumeSpecName: "inventory") pod "8e96214f-a46e-451a-97d9-d448c66826f4" (UID: "8e96214f-a46e-451a-97d9-d448c66826f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.441031 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8e96214f-a46e-451a-97d9-d448c66826f4" (UID: "8e96214f-a46e-451a-97d9-d448c66826f4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.469601 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"]
Feb 16 13:18:05 crc kubenswrapper[4740]: E0216 13:18:05.470043 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e96214f-a46e-451a-97d9-d448c66826f4" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.470059 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e96214f-a46e-451a-97d9-d448c66826f4" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:18:05 crc kubenswrapper[4740]: E0216 13:18:05.470093 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab47f99f-f805-4d2e-bdf6-6da944e511a5" containerName="collect-profiles"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.470099 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab47f99f-f805-4d2e-bdf6-6da944e511a5" containerName="collect-profiles"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.470289 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab47f99f-f805-4d2e-bdf6-6da944e511a5" containerName="collect-profiles"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.470313 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e96214f-a46e-451a-97d9-d448c66826f4" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.470938 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.488675 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7m79\" (UniqueName: \"kubernetes.io/projected/8e96214f-a46e-451a-97d9-d448c66826f4-kube-api-access-m7m79\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.488851 4740 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.488916 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.488974 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e96214f-a46e-451a-97d9-d448c66826f4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.491084 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"]
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.590974 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.591022 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9lzx\" (UniqueName: \"kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.591086 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.692358 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.692398 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9lzx\" (UniqueName: \"kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.692445 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.697221 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.698374 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.723453 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9lzx\" (UniqueName: \"kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:18:05 crc kubenswrapper[4740]: I0216 13:18:05.840631 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"
Feb 16 13:18:06 crc kubenswrapper[4740]: I0216 13:18:06.368273 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g"]
Feb 16 13:18:06 crc kubenswrapper[4740]: I0216 13:18:06.390010 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" event={"ID":"fe15334d-14c1-4670-89fe-3b7d4864b782","Type":"ContainerStarted","Data":"fd35047359348bcf6757809aaf75d042ab2e0b5ade3ef1747ba8159e7d69ef57"}
Feb 16 13:18:07 crc kubenswrapper[4740]: I0216 13:18:07.400192 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" event={"ID":"fe15334d-14c1-4670-89fe-3b7d4864b782","Type":"ContainerStarted","Data":"1a6e7751ff12660592cb0af45868f0caa9cc493451bc41d110b28a333756e5e7"}
Feb 16 13:18:07 crc kubenswrapper[4740]: I0216 13:18:07.421631 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" podStartSLOduration=1.8797320929999999 podStartE2EDuration="2.421609889s" podCreationTimestamp="2026-02-16 13:18:05 +0000 UTC" firstStartedPulling="2026-02-16 13:18:06.351578088 +0000 UTC m=+1513.727926819" lastFinishedPulling="2026-02-16 13:18:06.893455894 +0000 UTC m=+1514.269804615" observedRunningTime="2026-02-16 13:18:07.416105766 +0000 UTC m=+1514.792454487" watchObservedRunningTime="2026-02-16 13:18:07.421609889 +0000 UTC m=+1514.797958610"
Feb 16 13:18:15 crc kubenswrapper[4740]: I0216 13:18:15.575164 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 13:18:15 crc kubenswrapper[4740]: I0216 13:18:15.575750 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 13:18:15 crc kubenswrapper[4740]: I0216 13:18:15.575803 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj"
Feb 16 13:18:15 crc kubenswrapper[4740]: I0216 13:18:15.576419 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 13:18:15 crc kubenswrapper[4740]: I0216 13:18:15.576484 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" gracePeriod=600
Feb 16 13:18:15 crc kubenswrapper[4740]: E0216 13:18:15.702711 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:18:16 crc kubenswrapper[4740]: I0216 13:18:16.481779 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" exitCode=0
Feb 16 13:18:16 crc kubenswrapper[4740]: I0216 13:18:16.481909 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"}
Feb 16 13:18:16 crc kubenswrapper[4740]: I0216 13:18:16.482001 4740 scope.go:117] "RemoveContainer" containerID="330ca6e50d6523dd1e224885a601f2da2f7f7c8f0b2acff53d1e7af3aabbc8e1"
Feb 16 13:18:16 crc kubenswrapper[4740]: I0216 13:18:16.482926 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"
Feb 16 13:18:16 crc kubenswrapper[4740]: E0216 13:18:16.483515 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:18:31 crc kubenswrapper[4740]: I0216 13:18:31.281232 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"
Feb 16 13:18:31 crc kubenswrapper[4740]: E0216 13:18:31.282141 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:18:44 crc kubenswrapper[4740]: I0216 13:18:44.283354 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"
Feb 16 13:18:44 crc kubenswrapper[4740]: E0216 13:18:44.284557 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:18:57 crc kubenswrapper[4740]: I0216 13:18:57.281967 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"
Feb 16 13:18:57 crc kubenswrapper[4740]: E0216 13:18:57.283020 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.048312 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8cb8-account-create-update-dgv8s"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.060001 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9mvdt"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.070602 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7989-account-create-update-s6gss"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.079884 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8cb8-account-create-update-dgv8s"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.088240 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-nxmdt"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.096521 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9mvdt"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.103977 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4b9b-account-create-update-njhb7"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.112486 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9v664"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.122658 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4b9b-account-create-update-njhb7"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.131010 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7989-account-create-update-s6gss"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.138354 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-nxmdt"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.147009 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9v664"]
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.296234 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c97501-5a5c-4e03-8e50-cf7422806c32" path="/var/lib/kubelet/pods/14c97501-5a5c-4e03-8e50-cf7422806c32/volumes"
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.296959 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4996abf8-6c4b-42d0-99f2-aeacf2fd5591" path="/var/lib/kubelet/pods/4996abf8-6c4b-42d0-99f2-aeacf2fd5591/volumes"
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.297494 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59544dcd-0bd1-4b5f-abf6-9ab972168af0" path="/var/lib/kubelet/pods/59544dcd-0bd1-4b5f-abf6-9ab972168af0/volumes"
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.298130 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b945754-b567-43e9-a84a-4e0ea95900e7" path="/var/lib/kubelet/pods/5b945754-b567-43e9-a84a-4e0ea95900e7/volumes"
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.299286 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b12e494a-5467-4264-a0e5-2596c61b4a73" path="/var/lib/kubelet/pods/b12e494a-5467-4264-a0e5-2596c61b4a73/volumes"
Feb 16 13:18:59 crc kubenswrapper[4740]: I0216 13:18:59.299850 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb88b05d-b7b7-4a08-847c-5e8d5cc98477" path="/var/lib/kubelet/pods/bb88b05d-b7b7-4a08-847c-5e8d5cc98477/volumes"
Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.605586 4740 scope.go:117] "RemoveContainer" containerID="a9f2fb916ca14b8c5ef554516013184723e7f73663c25f8d4596934aa4c48ae1"
Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.634332 4740 scope.go:117] "RemoveContainer" containerID="494606bd64bc796891b73eef0c184bf2997ed3f769f9febfe8dd04a4677e5715"
Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.668657 4740 scope.go:117] "RemoveContainer" containerID="373fd871381d49fd63e5ca3ab666f3487ac9b7f0d28abe89d7c9eb2229c50cd0"
Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.711748 4740 scope.go:117] "RemoveContainer" containerID="baf6eedd884c010f372a94d42ad034029305e68987497ddad6d85e711b8ce518"
Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.753270 4740 scope.go:117] "RemoveContainer" containerID="90265e42eb4894d283c546642bcf3972b8e18c6c5cd6a445431640804cc73965"
Feb 16 13:19:03 crc kubenswrapper[4740]: I0216 13:19:03.791532 4740 scope.go:117] "RemoveContainer"
containerID="4ae899300c9a6a6072cde926ba47e7d60a16bc7eccd0ef24bf505410cc36580f" Feb 16 13:19:10 crc kubenswrapper[4740]: I0216 13:19:10.282082 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:19:10 crc kubenswrapper[4740]: E0216 13:19:10.282952 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.487524 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-glnbq"] Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.491408 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.496797 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-glnbq"] Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.589228 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.589392 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp49l\" (UniqueName: \"kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.589633 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.691238 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.691326 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sp49l\" (UniqueName: \"kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.691439 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.691881 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.691908 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.710501 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp49l\" (UniqueName: \"kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l\") pod \"certified-operators-glnbq\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:20 crc kubenswrapper[4740]: I0216 13:19:20.855431 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:21 crc kubenswrapper[4740]: I0216 13:19:21.376127 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-glnbq"] Feb 16 13:19:22 crc kubenswrapper[4740]: I0216 13:19:22.088059 4740 generic.go:334] "Generic (PLEG): container finished" podID="4cafc58b-221a-4319-b03c-b2854606f194" containerID="c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22" exitCode=0 Feb 16 13:19:22 crc kubenswrapper[4740]: I0216 13:19:22.088139 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerDied","Data":"c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22"} Feb 16 13:19:22 crc kubenswrapper[4740]: I0216 13:19:22.088453 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerStarted","Data":"bbe5d05db57db865b1503b876991dc3aec22375e5378fa75aebb73d65f05e85d"} Feb 16 13:19:22 crc kubenswrapper[4740]: I0216 13:19:22.281358 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:19:22 crc kubenswrapper[4740]: E0216 13:19:22.281684 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:19:24 crc kubenswrapper[4740]: I0216 13:19:24.110079 4740 generic.go:334] "Generic (PLEG): container finished" podID="4cafc58b-221a-4319-b03c-b2854606f194" 
containerID="b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c" exitCode=0 Feb 16 13:19:24 crc kubenswrapper[4740]: I0216 13:19:24.110358 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerDied","Data":"b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c"} Feb 16 13:19:25 crc kubenswrapper[4740]: I0216 13:19:25.039753 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gqbdm"] Feb 16 13:19:25 crc kubenswrapper[4740]: I0216 13:19:25.051279 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gqbdm"] Feb 16 13:19:25 crc kubenswrapper[4740]: I0216 13:19:25.119455 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerStarted","Data":"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6"} Feb 16 13:19:25 crc kubenswrapper[4740]: I0216 13:19:25.137889 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-glnbq" podStartSLOduration=2.700881147 podStartE2EDuration="5.137870747s" podCreationTimestamp="2026-02-16 13:19:20 +0000 UTC" firstStartedPulling="2026-02-16 13:19:22.090037521 +0000 UTC m=+1589.466386262" lastFinishedPulling="2026-02-16 13:19:24.527027141 +0000 UTC m=+1591.903375862" observedRunningTime="2026-02-16 13:19:25.13509902 +0000 UTC m=+1592.511447741" watchObservedRunningTime="2026-02-16 13:19:25.137870747 +0000 UTC m=+1592.514219458" Feb 16 13:19:25 crc kubenswrapper[4740]: I0216 13:19:25.292044 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15147587-626f-4577-b5af-b8f574f60152" path="/var/lib/kubelet/pods/15147587-626f-4577-b5af-b8f574f60152/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: 
I0216 13:19:29.035865 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2e1c-account-create-update-htmg9"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.048114 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vf54h"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.061822 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-j27bj"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.073318 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-858d-account-create-update-xr2fs"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.081932 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vf54h"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.091611 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2e1c-account-create-update-htmg9"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.099580 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-j27bj"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.107325 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-858d-account-create-update-xr2fs"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.114750 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-plzhg"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.122906 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f6f4-account-create-update-l7nbq"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.130474 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-plzhg"] Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.137471 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f6f4-account-create-update-l7nbq"] Feb 16 13:19:29 crc 
kubenswrapper[4740]: I0216 13:19:29.291106 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5296850e-63c0-4801-bff8-bc5213555f58" path="/var/lib/kubelet/pods/5296850e-63c0-4801-bff8-bc5213555f58/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.291731 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="634925bb-5381-4298-a256-447ef56a2f2a" path="/var/lib/kubelet/pods/634925bb-5381-4298-a256-447ef56a2f2a/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.292265 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65301f64-cd42-4faf-b454-a43c7c7096a1" path="/var/lib/kubelet/pods/65301f64-cd42-4faf-b454-a43c7c7096a1/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.292759 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="685d1543-1ab9-435f-b2c0-2a54c104e86f" path="/var/lib/kubelet/pods/685d1543-1ab9-435f-b2c0-2a54c104e86f/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.293936 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aafb0ee-2681-48a9-b1e0-2442d0a16541" path="/var/lib/kubelet/pods/9aafb0ee-2681-48a9-b1e0-2442d0a16541/volumes" Feb 16 13:19:29 crc kubenswrapper[4740]: I0216 13:19:29.294433 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a14f3fd5-4d53-4336-85b1-7d636060bd0a" path="/var/lib/kubelet/pods/a14f3fd5-4d53-4336-85b1-7d636060bd0a/volumes" Feb 16 13:19:30 crc kubenswrapper[4740]: I0216 13:19:30.856288 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:30 crc kubenswrapper[4740]: I0216 13:19:30.857540 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:30 crc kubenswrapper[4740]: I0216 13:19:30.919028 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:31 crc kubenswrapper[4740]: I0216 13:19:31.218743 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:31 crc kubenswrapper[4740]: I0216 13:19:31.307936 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-glnbq"] Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.198324 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-glnbq" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="registry-server" containerID="cri-o://1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6" gracePeriod=2 Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.747097 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.753098 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content\") pod \"4cafc58b-221a-4319-b03c-b2854606f194\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.753167 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp49l\" (UniqueName: \"kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l\") pod \"4cafc58b-221a-4319-b03c-b2854606f194\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.753265 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities\") pod 
\"4cafc58b-221a-4319-b03c-b2854606f194\" (UID: \"4cafc58b-221a-4319-b03c-b2854606f194\") " Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.754315 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities" (OuterVolumeSpecName: "utilities") pod "4cafc58b-221a-4319-b03c-b2854606f194" (UID: "4cafc58b-221a-4319-b03c-b2854606f194"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.760708 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l" (OuterVolumeSpecName: "kube-api-access-sp49l") pod "4cafc58b-221a-4319-b03c-b2854606f194" (UID: "4cafc58b-221a-4319-b03c-b2854606f194"). InnerVolumeSpecName "kube-api-access-sp49l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.855510 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.855544 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp49l\" (UniqueName: \"kubernetes.io/projected/4cafc58b-221a-4319-b03c-b2854606f194-kube-api-access-sp49l\") on node \"crc\" DevicePath \"\"" Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.954218 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cafc58b-221a-4319-b03c-b2854606f194" (UID: "4cafc58b-221a-4319-b03c-b2854606f194"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:19:33 crc kubenswrapper[4740]: I0216 13:19:33.957271 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cafc58b-221a-4319-b03c-b2854606f194-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.045840 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qvqg7"] Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.063488 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qvqg7"] Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.212005 4740 generic.go:334] "Generic (PLEG): container finished" podID="4cafc58b-221a-4319-b03c-b2854606f194" containerID="1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6" exitCode=0 Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.212076 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerDied","Data":"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6"} Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.212110 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glnbq" event={"ID":"4cafc58b-221a-4319-b03c-b2854606f194","Type":"ContainerDied","Data":"bbe5d05db57db865b1503b876991dc3aec22375e5378fa75aebb73d65f05e85d"} Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.212115 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-glnbq" Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.212158 4740 scope.go:117] "RemoveContainer" containerID="1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6" Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.231470 4740 scope.go:117] "RemoveContainer" containerID="b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c" Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.247552 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-glnbq"] Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.255741 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-glnbq"] Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.273784 4740 scope.go:117] "RemoveContainer" containerID="c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22" Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.296805 4740 scope.go:117] "RemoveContainer" containerID="1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6" Feb 16 13:19:34 crc kubenswrapper[4740]: E0216 13:19:34.297248 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6\": container with ID starting with 1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6 not found: ID does not exist" containerID="1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6" Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.297300 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6"} err="failed to get container status \"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6\": rpc error: code = NotFound desc = could not find 
container \"1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6\": container with ID starting with 1b5fd6a8280537258a43ef14582e98b1cf938334cc595af0d4d9ee5162562ea6 not found: ID does not exist" Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.297366 4740 scope.go:117] "RemoveContainer" containerID="b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c" Feb 16 13:19:34 crc kubenswrapper[4740]: E0216 13:19:34.297798 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c\": container with ID starting with b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c not found: ID does not exist" containerID="b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c" Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.297845 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c"} err="failed to get container status \"b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c\": rpc error: code = NotFound desc = could not find container \"b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c\": container with ID starting with b5822d14f04f407167f62e6d5791463ede08cda9ffec0c29a4abb2e062972d2c not found: ID does not exist" Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.297866 4740 scope.go:117] "RemoveContainer" containerID="c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22" Feb 16 13:19:34 crc kubenswrapper[4740]: E0216 13:19:34.298106 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22\": container with ID starting with c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22 not found: ID does 
not exist" containerID="c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22" Feb 16 13:19:34 crc kubenswrapper[4740]: I0216 13:19:34.298141 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22"} err="failed to get container status \"c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22\": rpc error: code = NotFound desc = could not find container \"c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22\": container with ID starting with c47acac96899fb9dad1ed27de353805a98512d532dd59be7ac657591eb9eed22 not found: ID does not exist" Feb 16 13:19:35 crc kubenswrapper[4740]: I0216 13:19:35.298365 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cea0875-b3a8-4a52-84ff-d9215408294b" path="/var/lib/kubelet/pods/3cea0875-b3a8-4a52-84ff-d9215408294b/volumes" Feb 16 13:19:35 crc kubenswrapper[4740]: I0216 13:19:35.298971 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cafc58b-221a-4319-b03c-b2854606f194" path="/var/lib/kubelet/pods/4cafc58b-221a-4319-b03c-b2854606f194/volumes" Feb 16 13:19:37 crc kubenswrapper[4740]: I0216 13:19:37.281752 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:19:37 crc kubenswrapper[4740]: E0216 13:19:37.283188 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.047744 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-zzh8l"] Feb 16 13:19:40 crc kubenswrapper[4740]: E0216 13:19:40.051133 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="extract-content" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.051683 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="extract-content" Feb 16 13:19:40 crc kubenswrapper[4740]: E0216 13:19:40.051881 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="extract-utilities" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.051959 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="extract-utilities" Feb 16 13:19:40 crc kubenswrapper[4740]: E0216 13:19:40.052071 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="registry-server" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.052140 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="registry-server" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.052422 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cafc58b-221a-4319-b03c-b2854606f194" containerName="registry-server" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.054261 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.087020 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzh8l"] Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.186316 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-catalog-content\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.186419 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-utilities\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.186488 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9vxn\" (UniqueName: \"kubernetes.io/projected/805f4cce-9373-4649-8daa-e97ab900433f-kube-api-access-w9vxn\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.288861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-catalog-content\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.288914 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-utilities\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.288983 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9vxn\" (UniqueName: \"kubernetes.io/projected/805f4cce-9373-4649-8daa-e97ab900433f-kube-api-access-w9vxn\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.289543 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-catalog-content\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.289666 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805f4cce-9373-4649-8daa-e97ab900433f-utilities\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.321101 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9vxn\" (UniqueName: \"kubernetes.io/projected/805f4cce-9373-4649-8daa-e97ab900433f-kube-api-access-w9vxn\") pod \"community-operators-zzh8l\" (UID: \"805f4cce-9373-4649-8daa-e97ab900433f\") " pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.388244 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:40 crc kubenswrapper[4740]: I0216 13:19:40.948220 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzh8l"] Feb 16 13:19:41 crc kubenswrapper[4740]: I0216 13:19:41.296454 4740 generic.go:334] "Generic (PLEG): container finished" podID="805f4cce-9373-4649-8daa-e97ab900433f" containerID="6331156baa2822a146150f209c54b198ff90e84ba91ea60edd9d4639e468a3d2" exitCode=0 Feb 16 13:19:41 crc kubenswrapper[4740]: I0216 13:19:41.296513 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzh8l" event={"ID":"805f4cce-9373-4649-8daa-e97ab900433f","Type":"ContainerDied","Data":"6331156baa2822a146150f209c54b198ff90e84ba91ea60edd9d4639e468a3d2"} Feb 16 13:19:41 crc kubenswrapper[4740]: I0216 13:19:41.296544 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzh8l" event={"ID":"805f4cce-9373-4649-8daa-e97ab900433f","Type":"ContainerStarted","Data":"15fb8bcf5417039efc2de024645358e14ed957b9f5d08a68c15b0abf1eb6f47a"} Feb 16 13:19:41 crc kubenswrapper[4740]: I0216 13:19:41.299408 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:19:45 crc kubenswrapper[4740]: I0216 13:19:45.354513 4740 generic.go:334] "Generic (PLEG): container finished" podID="805f4cce-9373-4649-8daa-e97ab900433f" containerID="4486e629945062b7cb8b99f9c66aad8c1cc72225676f5f670d4681bc91d01b42" exitCode=0 Feb 16 13:19:45 crc kubenswrapper[4740]: I0216 13:19:45.354801 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzh8l" event={"ID":"805f4cce-9373-4649-8daa-e97ab900433f","Type":"ContainerDied","Data":"4486e629945062b7cb8b99f9c66aad8c1cc72225676f5f670d4681bc91d01b42"} Feb 16 13:19:46 crc kubenswrapper[4740]: I0216 13:19:46.368471 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-zzh8l" event={"ID":"805f4cce-9373-4649-8daa-e97ab900433f","Type":"ContainerStarted","Data":"92b4e33fc0c95830871b37ce7824b131631f109024b3d7bdeef190168f6c3939"} Feb 16 13:19:50 crc kubenswrapper[4740]: I0216 13:19:50.373305 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:19:50 crc kubenswrapper[4740]: E0216 13:19:50.374048 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:19:50 crc kubenswrapper[4740]: I0216 13:19:50.389102 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:50 crc kubenswrapper[4740]: I0216 13:19:50.389450 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:50 crc kubenswrapper[4740]: I0216 13:19:50.461988 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:19:50 crc kubenswrapper[4740]: I0216 13:19:50.488875 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zzh8l" podStartSLOduration=5.791046971 podStartE2EDuration="10.488858889s" podCreationTimestamp="2026-02-16 13:19:40 +0000 UTC" firstStartedPulling="2026-02-16 13:19:41.299187519 +0000 UTC m=+1608.675536240" lastFinishedPulling="2026-02-16 13:19:45.996999397 +0000 UTC m=+1613.373348158" observedRunningTime="2026-02-16 13:19:46.390233433 +0000 UTC 
m=+1613.766582154" watchObservedRunningTime="2026-02-16 13:19:50.488858889 +0000 UTC m=+1617.865207610" Feb 16 13:19:54 crc kubenswrapper[4740]: I0216 13:19:54.456704 4740 generic.go:334] "Generic (PLEG): container finished" podID="fe15334d-14c1-4670-89fe-3b7d4864b782" containerID="1a6e7751ff12660592cb0af45868f0caa9cc493451bc41d110b28a333756e5e7" exitCode=0 Feb 16 13:19:54 crc kubenswrapper[4740]: I0216 13:19:54.456847 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" event={"ID":"fe15334d-14c1-4670-89fe-3b7d4864b782","Type":"ContainerDied","Data":"1a6e7751ff12660592cb0af45868f0caa9cc493451bc41d110b28a333756e5e7"} Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.848313 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.903015 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9lzx\" (UniqueName: \"kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx\") pod \"fe15334d-14c1-4670-89fe-3b7d4864b782\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.903076 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam\") pod \"fe15334d-14c1-4670-89fe-3b7d4864b782\" (UID: \"fe15334d-14c1-4670-89fe-3b7d4864b782\") " Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.903215 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory\") pod \"fe15334d-14c1-4670-89fe-3b7d4864b782\" (UID: 
\"fe15334d-14c1-4670-89fe-3b7d4864b782\") " Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.910710 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx" (OuterVolumeSpecName: "kube-api-access-j9lzx") pod "fe15334d-14c1-4670-89fe-3b7d4864b782" (UID: "fe15334d-14c1-4670-89fe-3b7d4864b782"). InnerVolumeSpecName "kube-api-access-j9lzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.931707 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fe15334d-14c1-4670-89fe-3b7d4864b782" (UID: "fe15334d-14c1-4670-89fe-3b7d4864b782"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:19:55 crc kubenswrapper[4740]: I0216 13:19:55.933051 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory" (OuterVolumeSpecName: "inventory") pod "fe15334d-14c1-4670-89fe-3b7d4864b782" (UID: "fe15334d-14c1-4670-89fe-3b7d4864b782"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.004712 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9lzx\" (UniqueName: \"kubernetes.io/projected/fe15334d-14c1-4670-89fe-3b7d4864b782-kube-api-access-j9lzx\") on node \"crc\" DevicePath \"\"" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.004747 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.004757 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe15334d-14c1-4670-89fe-3b7d4864b782-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.480140 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" event={"ID":"fe15334d-14c1-4670-89fe-3b7d4864b782","Type":"ContainerDied","Data":"fd35047359348bcf6757809aaf75d042ab2e0b5ade3ef1747ba8159e7d69ef57"} Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.480468 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd35047359348bcf6757809aaf75d042ab2e0b5ade3ef1747ba8159e7d69ef57" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.480200 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.566534 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"] Feb 16 13:19:56 crc kubenswrapper[4740]: E0216 13:19:56.567205 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe15334d-14c1-4670-89fe-3b7d4864b782" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.567346 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe15334d-14c1-4670-89fe-3b7d4864b782" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.567709 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe15334d-14c1-4670-89fe-3b7d4864b782" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.568613 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.571799 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.572059 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.572790 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.573143 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.578847 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"] Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.616098 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.616163 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvhh\" (UniqueName: \"kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 
13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.616205 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.717654 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.718003 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvhh\" (UniqueName: \"kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.718212 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.724981 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.735972 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.739487 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvhh\" (UniqueName: \"kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:19:56 crc kubenswrapper[4740]: I0216 13:19:56.889595 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:19:57 crc kubenswrapper[4740]: I0216 13:19:57.388999 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"] Feb 16 13:19:57 crc kubenswrapper[4740]: I0216 13:19:57.488330 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" event={"ID":"3691fefa-c161-4670-bae7-ddde074e2892","Type":"ContainerStarted","Data":"cf9e74adf45991a36e82ec73125fa24d4e7afe484c44e3eea437484e318caeb6"} Feb 16 13:19:58 crc kubenswrapper[4740]: I0216 13:19:58.501960 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" event={"ID":"3691fefa-c161-4670-bae7-ddde074e2892","Type":"ContainerStarted","Data":"d9bfc50642f18cd3bad0f6a96456efca0c8670bd4cddb59f96e902ba917a08e0"} Feb 16 13:19:58 crc kubenswrapper[4740]: I0216 13:19:58.524534 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" podStartSLOduration=2.030285704 podStartE2EDuration="2.52451749s" podCreationTimestamp="2026-02-16 13:19:56 +0000 UTC" firstStartedPulling="2026-02-16 13:19:57.393255004 +0000 UTC m=+1624.769603745" lastFinishedPulling="2026-02-16 13:19:57.88748679 +0000 UTC m=+1625.263835531" observedRunningTime="2026-02-16 13:19:58.515724554 +0000 UTC m=+1625.892073275" watchObservedRunningTime="2026-02-16 13:19:58.52451749 +0000 UTC m=+1625.900866211" Feb 16 13:20:00 crc kubenswrapper[4740]: I0216 13:20:00.440223 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zzh8l" Feb 16 13:20:00 crc kubenswrapper[4740]: I0216 13:20:00.514256 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-zzh8l"] Feb 16 13:20:00 crc kubenswrapper[4740]: I0216 13:20:00.555752 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-czkjl"] Feb 16 13:20:00 crc kubenswrapper[4740]: I0216 13:20:00.556023 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-czkjl" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="registry-server" containerID="cri-o://b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b" gracePeriod=2 Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.021414 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-czkjl" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.066625 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hclws"] Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.082009 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7lg27"] Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.090418 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hclws"] Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.104805 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content\") pod \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.105159 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities\") pod \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " Feb 16 13:20:01 crc 
kubenswrapper[4740]: I0216 13:20:01.105372 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbgkp\" (UniqueName: \"kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp\") pod \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\" (UID: \"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf\") " Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.107833 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7lg27"] Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.127479 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities" (OuterVolumeSpecName: "utilities") pod "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" (UID: "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.127600 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp" (OuterVolumeSpecName: "kube-api-access-xbgkp") pod "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" (UID: "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf"). InnerVolumeSpecName "kube-api-access-xbgkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.161042 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" (UID: "6ca213d9-ef6f-4240-aa95-fe7f4e2691cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.207363 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbgkp\" (UniqueName: \"kubernetes.io/projected/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-kube-api-access-xbgkp\") on node \"crc\" DevicePath \"\"" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.207392 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.207404 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.291979 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c41d146-de9f-4d90-bb9e-6c12fc832650" path="/var/lib/kubelet/pods/2c41d146-de9f-4d90-bb9e-6c12fc832650/volumes" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.292638 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f092c8c4-9a32-4093-9a5c-bc5fd05d600e" path="/var/lib/kubelet/pods/f092c8c4-9a32-4093-9a5c-bc5fd05d600e/volumes" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.542619 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerID="b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b" exitCode=0 Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.542678 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerDied","Data":"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b"} Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 
13:20:01.542704 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-czkjl" event={"ID":"6ca213d9-ef6f-4240-aa95-fe7f4e2691cf","Type":"ContainerDied","Data":"98fe05c00f99e38008108c07c4337266311b937b88c3c95f0dd66f754946345d"} Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.542720 4740 scope.go:117] "RemoveContainer" containerID="b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.542862 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-czkjl" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.565207 4740 scope.go:117] "RemoveContainer" containerID="591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.568406 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-czkjl"] Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.576364 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-czkjl"] Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.587738 4740 scope.go:117] "RemoveContainer" containerID="a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.638826 4740 scope.go:117] "RemoveContainer" containerID="b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b" Feb 16 13:20:01 crc kubenswrapper[4740]: E0216 13:20:01.639174 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b\": container with ID starting with b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b not found: ID does not exist" containerID="b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b" 
Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.639244 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b"} err="failed to get container status \"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b\": rpc error: code = NotFound desc = could not find container \"b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b\": container with ID starting with b5b360903108b6eedbb12f797b8ade65af88136e2ce5e71252856a95aa12ee8b not found: ID does not exist" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.639269 4740 scope.go:117] "RemoveContainer" containerID="591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767" Feb 16 13:20:01 crc kubenswrapper[4740]: E0216 13:20:01.639511 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767\": container with ID starting with 591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767 not found: ID does not exist" containerID="591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.639541 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767"} err="failed to get container status \"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767\": rpc error: code = NotFound desc = could not find container \"591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767\": container with ID starting with 591f8e6a5b781bee2a72d0da510a1b6248cfaf6e2d606df81c1697d6a9178767 not found: ID does not exist" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.639562 4740 scope.go:117] "RemoveContainer" 
containerID="a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6" Feb 16 13:20:01 crc kubenswrapper[4740]: E0216 13:20:01.639736 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6\": container with ID starting with a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6 not found: ID does not exist" containerID="a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6" Feb 16 13:20:01 crc kubenswrapper[4740]: I0216 13:20:01.639766 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6"} err="failed to get container status \"a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6\": rpc error: code = NotFound desc = could not find container \"a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6\": container with ID starting with a3b3d25fbd8f0cf4c6ad9ce6454431d246d4416aa222ba1820e11323167ab5c6 not found: ID does not exist" Feb 16 13:20:02 crc kubenswrapper[4740]: I0216 13:20:02.282068 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:20:02 crc kubenswrapper[4740]: E0216 13:20:02.283041 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:20:03 crc kubenswrapper[4740]: I0216 13:20:03.292553 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" 
path="/var/lib/kubelet/pods/6ca213d9-ef6f-4240-aa95-fe7f4e2691cf/volumes" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.479802 4740 scope.go:117] "RemoveContainer" containerID="f1ee4fdc8a66d1ba0f722901509cbb10e2513b2d2385aba5e0bb1ae87766bf21" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.553844 4740 scope.go:117] "RemoveContainer" containerID="2c739ebc1f1677ff442d40ca962571c9b9950c0f73007406504adc0d79d91e7b" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.619650 4740 scope.go:117] "RemoveContainer" containerID="c259d38cb4fa3c5851c1172b3420cec9a5f775ccc35003b355c462a18e258ac9" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.655309 4740 scope.go:117] "RemoveContainer" containerID="a426852617c1fdbfaae0a0c105e30e4a9ba96bd1307ceb03aae494de8c516444" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.691192 4740 scope.go:117] "RemoveContainer" containerID="f2c94c167796a74c5aec9d021793c96199b01a3ee67b46b0fd7d1575574cf5b7" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.729911 4740 scope.go:117] "RemoveContainer" containerID="032b7ecb51e2df34f10ba43675ea39076c3d719ca854e2ab5f2977210eadf6f6" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.780236 4740 scope.go:117] "RemoveContainer" containerID="297fab87042f05bdda341fb78ed7de393ee4aec91b3ea8c4dbadb862e85e4e33" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.800120 4740 scope.go:117] "RemoveContainer" containerID="f5820346a7406bd0978f9265ad799cb90df8fd2faf62bf128649990ff88a581c" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.823888 4740 scope.go:117] "RemoveContainer" containerID="cd58c8b5fc614deaab5c81fb1b971a1824dde743ab1799ae9f95e3e1c7789b94" Feb 16 13:20:04 crc kubenswrapper[4740]: I0216 13:20:04.842436 4740 scope.go:117] "RemoveContainer" containerID="afb5050141aa0fbb8480e6bcc95d53720db77edafe16a89f557711467d506eaf" Feb 16 13:20:10 crc kubenswrapper[4740]: I0216 13:20:10.064485 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-bootstrap-lxgpl"] Feb 16 13:20:10 crc kubenswrapper[4740]: I0216 13:20:10.083684 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lxgpl"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.044464 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-d9rnm"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.055524 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dlcqm"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.071107 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-d9rnm"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.081883 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-dlcqm"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.092400 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:11 crc kubenswrapper[4740]: E0216 13:20:11.092787 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="registry-server" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.092804 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="registry-server" Feb 16 13:20:11 crc kubenswrapper[4740]: E0216 13:20:11.092835 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="extract-utilities" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.092841 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="extract-utilities" Feb 16 13:20:11 crc kubenswrapper[4740]: E0216 13:20:11.092851 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" 
containerName="extract-content" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.092876 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="extract-content" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.093076 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca213d9-ef6f-4240-aa95-fe7f4e2691cf" containerName="registry-server" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.094470 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.100422 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.190038 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pl9j\" (UniqueName: \"kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.190292 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.190450 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " 
pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.291514 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.291626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.291676 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pl9j\" (UniqueName: \"kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.292169 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.292287 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" 
Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.292790 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa1d954-018c-45a1-93e6-149318cdda8c" path="/var/lib/kubelet/pods/2fa1d954-018c-45a1-93e6-149318cdda8c/volumes" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.293805 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b63f4468-5c78-4dfd-a40a-302877eba3dc" path="/var/lib/kubelet/pods/b63f4468-5c78-4dfd-a40a-302877eba3dc/volumes" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.294541 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1263236-13e5-4a79-b19a-96f535ae0783" path="/var/lib/kubelet/pods/c1263236-13e5-4a79-b19a-96f535ae0783/volumes" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.311009 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pl9j\" (UniqueName: \"kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j\") pod \"redhat-marketplace-f4gfd\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.413770 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:11 crc kubenswrapper[4740]: I0216 13:20:11.847895 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:12 crc kubenswrapper[4740]: I0216 13:20:12.658108 4740 generic.go:334] "Generic (PLEG): container finished" podID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerID="9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d" exitCode=0 Feb 16 13:20:12 crc kubenswrapper[4740]: I0216 13:20:12.658172 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerDied","Data":"9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d"} Feb 16 13:20:12 crc kubenswrapper[4740]: I0216 13:20:12.658207 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerStarted","Data":"7bb32512b48a2ef2c26686beead27117e013fd16c2ee07f289aa711fb236a4ed"} Feb 16 13:20:14 crc kubenswrapper[4740]: I0216 13:20:14.282120 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:20:14 crc kubenswrapper[4740]: E0216 13:20:14.283262 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:20:14 crc kubenswrapper[4740]: I0216 13:20:14.680849 4740 generic.go:334] "Generic (PLEG): container finished" podID="f03eabf3-cb8f-4391-bafc-374ea00b3058" 
containerID="82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a" exitCode=0 Feb 16 13:20:14 crc kubenswrapper[4740]: I0216 13:20:14.680896 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerDied","Data":"82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a"} Feb 16 13:20:15 crc kubenswrapper[4740]: I0216 13:20:15.689764 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerStarted","Data":"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff"} Feb 16 13:20:15 crc kubenswrapper[4740]: I0216 13:20:15.708089 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f4gfd" podStartSLOduration=2.020969719 podStartE2EDuration="4.70807212s" podCreationTimestamp="2026-02-16 13:20:11 +0000 UTC" firstStartedPulling="2026-02-16 13:20:12.660274435 +0000 UTC m=+1640.036623156" lastFinishedPulling="2026-02-16 13:20:15.347376846 +0000 UTC m=+1642.723725557" observedRunningTime="2026-02-16 13:20:15.707494012 +0000 UTC m=+1643.083842753" watchObservedRunningTime="2026-02-16 13:20:15.70807212 +0000 UTC m=+1643.084420841" Feb 16 13:20:21 crc kubenswrapper[4740]: I0216 13:20:21.414093 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:21 crc kubenswrapper[4740]: I0216 13:20:21.414732 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:21 crc kubenswrapper[4740]: I0216 13:20:21.486685 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:21 crc kubenswrapper[4740]: I0216 13:20:21.792462 4740 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:21 crc kubenswrapper[4740]: I0216 13:20:21.849709 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:23 crc kubenswrapper[4740]: I0216 13:20:23.794694 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f4gfd" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="registry-server" containerID="cri-o://cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff" gracePeriod=2 Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.240268 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.365788 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content\") pod \"f03eabf3-cb8f-4391-bafc-374ea00b3058\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.366190 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities\") pod \"f03eabf3-cb8f-4391-bafc-374ea00b3058\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.366304 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pl9j\" (UniqueName: \"kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j\") pod \"f03eabf3-cb8f-4391-bafc-374ea00b3058\" (UID: \"f03eabf3-cb8f-4391-bafc-374ea00b3058\") " Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.367311 4740 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities" (OuterVolumeSpecName: "utilities") pod "f03eabf3-cb8f-4391-bafc-374ea00b3058" (UID: "f03eabf3-cb8f-4391-bafc-374ea00b3058"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.372278 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j" (OuterVolumeSpecName: "kube-api-access-2pl9j") pod "f03eabf3-cb8f-4391-bafc-374ea00b3058" (UID: "f03eabf3-cb8f-4391-bafc-374ea00b3058"). InnerVolumeSpecName "kube-api-access-2pl9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.393122 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f03eabf3-cb8f-4391-bafc-374ea00b3058" (UID: "f03eabf3-cb8f-4391-bafc-374ea00b3058"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.468849 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.469644 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03eabf3-cb8f-4391-bafc-374ea00b3058-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.469749 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pl9j\" (UniqueName: \"kubernetes.io/projected/f03eabf3-cb8f-4391-bafc-374ea00b3058-kube-api-access-2pl9j\") on node \"crc\" DevicePath \"\"" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.813085 4740 generic.go:334] "Generic (PLEG): container finished" podID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerID="cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff" exitCode=0 Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.813135 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerDied","Data":"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff"} Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.813146 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4gfd" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.813172 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4gfd" event={"ID":"f03eabf3-cb8f-4391-bafc-374ea00b3058","Type":"ContainerDied","Data":"7bb32512b48a2ef2c26686beead27117e013fd16c2ee07f289aa711fb236a4ed"} Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.813195 4740 scope.go:117] "RemoveContainer" containerID="cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.837290 4740 scope.go:117] "RemoveContainer" containerID="82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.864268 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.871842 4740 scope.go:117] "RemoveContainer" containerID="9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.881081 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4gfd"] Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.915088 4740 scope.go:117] "RemoveContainer" containerID="cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff" Feb 16 13:20:24 crc kubenswrapper[4740]: E0216 13:20:24.915939 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff\": container with ID starting with cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff not found: ID does not exist" containerID="cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.915984 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff"} err="failed to get container status \"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff\": rpc error: code = NotFound desc = could not find container \"cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff\": container with ID starting with cd704f0b4a319364c23a97af4997b1218a426ef17d5912b4a441d0ef69791eff not found: ID does not exist" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.916013 4740 scope.go:117] "RemoveContainer" containerID="82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a" Feb 16 13:20:24 crc kubenswrapper[4740]: E0216 13:20:24.916789 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a\": container with ID starting with 82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a not found: ID does not exist" containerID="82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.916842 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a"} err="failed to get container status \"82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a\": rpc error: code = NotFound desc = could not find container \"82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a\": container with ID starting with 82ce86589173242beb858d45c5505bc4d89860d379277d841680cff941c6e55a not found: ID does not exist" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.916869 4740 scope.go:117] "RemoveContainer" containerID="9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d" Feb 16 13:20:24 crc kubenswrapper[4740]: E0216 
13:20:24.917167 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d\": container with ID starting with 9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d not found: ID does not exist" containerID="9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d" Feb 16 13:20:24 crc kubenswrapper[4740]: I0216 13:20:24.917204 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d"} err="failed to get container status \"9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d\": rpc error: code = NotFound desc = could not find container \"9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d\": container with ID starting with 9e57477e26c78168e4a22b3841c1ec21e9f33daa0ef59714adf427c32c34306d not found: ID does not exist" Feb 16 13:20:25 crc kubenswrapper[4740]: I0216 13:20:25.293061 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" path="/var/lib/kubelet/pods/f03eabf3-cb8f-4391-bafc-374ea00b3058/volumes" Feb 16 13:20:26 crc kubenswrapper[4740]: I0216 13:20:26.281838 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:20:26 crc kubenswrapper[4740]: E0216 13:20:26.282716 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:20:32 crc kubenswrapper[4740]: I0216 13:20:32.051045 
4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2hxgr"] Feb 16 13:20:32 crc kubenswrapper[4740]: I0216 13:20:32.061081 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2hxgr"] Feb 16 13:20:33 crc kubenswrapper[4740]: I0216 13:20:33.291763 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6806e6-e7ab-40bb-a703-0f4bfe131539" path="/var/lib/kubelet/pods/6e6806e6-e7ab-40bb-a703-0f4bfe131539/volumes" Feb 16 13:20:40 crc kubenswrapper[4740]: I0216 13:20:40.282644 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:20:40 crc kubenswrapper[4740]: E0216 13:20:40.283559 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:20:55 crc kubenswrapper[4740]: I0216 13:20:55.281487 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:20:55 crc kubenswrapper[4740]: E0216 13:20:55.282324 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.042729 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-db-create-r46m7"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.055908 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jfqz9"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.068971 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-ctmrz"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.075657 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7f98-account-create-update-77pz4"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.083720 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-877a-account-create-update-w87w5"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.090891 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9f9b-account-create-update-rc9td"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.097070 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-r46m7"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.103820 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-ctmrz"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.110393 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jfqz9"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.118739 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-877a-account-create-update-w87w5"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.125776 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9f9b-account-create-update-rc9td"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.132354 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7f98-account-create-update-77pz4"] Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.306670 4740 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf48619-6b39-4215-950a-f8da809dcc11" path="/var/lib/kubelet/pods/0bf48619-6b39-4215-950a-f8da809dcc11/volumes" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.307728 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc528c1-14c9-4bb4-a6f8-621fc066e98a" path="/var/lib/kubelet/pods/2dc528c1-14c9-4bb4-a6f8-621fc066e98a/volumes" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.308438 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93273db-db1d-4c4b-85ad-2d87065c42f4" path="/var/lib/kubelet/pods/b93273db-db1d-4c4b-85ad-2d87065c42f4/volumes" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.309022 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c28029f1-eca0-4cd5-95b3-774c21d6d0ed" path="/var/lib/kubelet/pods/c28029f1-eca0-4cd5-95b3-774c21d6d0ed/volumes" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.310042 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce83ec9b-39d5-4bf9-b343-d3f06f886841" path="/var/lib/kubelet/pods/ce83ec9b-39d5-4bf9-b343-d3f06f886841/volumes" Feb 16 13:21:03 crc kubenswrapper[4740]: I0216 13:21:03.310562 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2ec561b-87d9-418d-9376-c48bb31d46f9" path="/var/lib/kubelet/pods/e2ec561b-87d9-418d-9376-c48bb31d46f9/volumes" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.040435 4740 scope.go:117] "RemoveContainer" containerID="4aa507b0c5065c88dcc09741d4612ac5be715de1d8ac33a2444842a74593667f" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.070363 4740 scope.go:117] "RemoveContainer" containerID="d50713edffd58148f7599a08a7e47edd0028addbe2f11ff9b3ec1d7b2dedaaf8" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.121383 4740 scope.go:117] "RemoveContainer" containerID="1f6ef107bcaf336d76ceed01fca141680bec52be47d97dcef3c48566b0276aa5" Feb 16 13:21:05 crc 
kubenswrapper[4740]: I0216 13:21:05.170469 4740 scope.go:117] "RemoveContainer" containerID="a8954ab70eb71a11cadf0847488023deb3343352dcccef9132db389ddd167a80" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.222570 4740 scope.go:117] "RemoveContainer" containerID="8842820f2cf4ddd2c9503055eef8bb5b04d701bcbb5f9d0e546ae5434e57491e" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.247517 4740 scope.go:117] "RemoveContainer" containerID="04fb5b738af72ba9d62044da274c169ea32070a2cc600c09016c81106717ecdd" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.278700 4740 scope.go:117] "RemoveContainer" containerID="451d00cb28f2393a2b488c082f43cfde6cbce5c2b86f4a3ccf6583823523e02b" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.308136 4740 scope.go:117] "RemoveContainer" containerID="20579605a8e47ed4449e3d674d1bbbbcd44cb3f5f3aba1e332068a4ec56b723d" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.326894 4740 scope.go:117] "RemoveContainer" containerID="344296975e26624ad4cacf476e74a30fa10626ccf25a97f67365e99050dc2e41" Feb 16 13:21:05 crc kubenswrapper[4740]: I0216 13:21:05.344849 4740 scope.go:117] "RemoveContainer" containerID="04f7de9c276248f11e1d14a403f81582e43458c7dfd1d3b8fc3dc8186de0b569" Feb 16 13:21:08 crc kubenswrapper[4740]: I0216 13:21:08.471717 4740 generic.go:334] "Generic (PLEG): container finished" podID="3691fefa-c161-4670-bae7-ddde074e2892" containerID="d9bfc50642f18cd3bad0f6a96456efca0c8670bd4cddb59f96e902ba917a08e0" exitCode=0 Feb 16 13:21:08 crc kubenswrapper[4740]: I0216 13:21:08.471899 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" event={"ID":"3691fefa-c161-4670-bae7-ddde074e2892","Type":"ContainerDied","Data":"d9bfc50642f18cd3bad0f6a96456efca0c8670bd4cddb59f96e902ba917a08e0"} Feb 16 13:21:09 crc kubenswrapper[4740]: I0216 13:21:09.871046 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.018974 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam\") pod \"3691fefa-c161-4670-bae7-ddde074e2892\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.019105 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory\") pod \"3691fefa-c161-4670-bae7-ddde074e2892\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.019142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvhh\" (UniqueName: \"kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh\") pod \"3691fefa-c161-4670-bae7-ddde074e2892\" (UID: \"3691fefa-c161-4670-bae7-ddde074e2892\") " Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.023955 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh" (OuterVolumeSpecName: "kube-api-access-9nvhh") pod "3691fefa-c161-4670-bae7-ddde074e2892" (UID: "3691fefa-c161-4670-bae7-ddde074e2892"). InnerVolumeSpecName "kube-api-access-9nvhh". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.044510 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3691fefa-c161-4670-bae7-ddde074e2892" (UID: "3691fefa-c161-4670-bae7-ddde074e2892"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.044999 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory" (OuterVolumeSpecName: "inventory") pod "3691fefa-c161-4670-bae7-ddde074e2892" (UID: "3691fefa-c161-4670-bae7-ddde074e2892"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.121048 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.121084 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nvhh\" (UniqueName: \"kubernetes.io/projected/3691fefa-c161-4670-bae7-ddde074e2892-kube-api-access-9nvhh\") on node \"crc\" DevicePath \"\""
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.121095 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3691fefa-c161-4670-bae7-ddde074e2892-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.281851 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"
Feb 16 13:21:10 crc kubenswrapper[4740]: E0216 13:21:10.282282 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.523486 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg" event={"ID":"3691fefa-c161-4670-bae7-ddde074e2892","Type":"ContainerDied","Data":"cf9e74adf45991a36e82ec73125fa24d4e7afe484c44e3eea437484e318caeb6"}
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.523529 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf9e74adf45991a36e82ec73125fa24d4e7afe484c44e3eea437484e318caeb6"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.523547 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.596766 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"]
Feb 16 13:21:10 crc kubenswrapper[4740]: E0216 13:21:10.597234 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3691fefa-c161-4670-bae7-ddde074e2892" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.597259 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3691fefa-c161-4670-bae7-ddde074e2892" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:21:10 crc kubenswrapper[4740]: E0216 13:21:10.597292 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="registry-server"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.597301 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="registry-server"
Feb 16 13:21:10 crc kubenswrapper[4740]: E0216 13:21:10.597313 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="extract-content"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.597320 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="extract-content"
Feb 16 13:21:10 crc kubenswrapper[4740]: E0216 13:21:10.597340 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="extract-utilities"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.597348 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="extract-utilities"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.597607 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3691fefa-c161-4670-bae7-ddde074e2892" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.597653 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03eabf3-cb8f-4391-bafc-374ea00b3058" containerName="registry-server"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.598497 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.602358 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.602444 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.602482 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.602489 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.608641 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"]
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.630058 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.630138 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.630167 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsngz\" (UniqueName: \"kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.733186 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.733344 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.733413 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsngz\" (UniqueName: \"kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.737567 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.740652 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.756037 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsngz\" (UniqueName: \"kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w42sv\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:10 crc kubenswrapper[4740]: I0216 13:21:10.912592 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:11 crc kubenswrapper[4740]: I0216 13:21:11.452771 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"]
Feb 16 13:21:11 crc kubenswrapper[4740]: I0216 13:21:11.535186 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" event={"ID":"5add9653-c644-42d7-bd4d-10ecb8f84a90","Type":"ContainerStarted","Data":"0d02f84743881743dccdc8a675b751fd5ccde625f142ea8fcdac0e776d29f5fb"}
Feb 16 13:21:12 crc kubenswrapper[4740]: I0216 13:21:12.543897 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" event={"ID":"5add9653-c644-42d7-bd4d-10ecb8f84a90","Type":"ContainerStarted","Data":"bef5b8545f936634ce120d772c950924a651bfe93cb7ce6009b4184b4473fef5"}
Feb 16 13:21:12 crc kubenswrapper[4740]: I0216 13:21:12.566195 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" podStartSLOduration=1.8272391639999999 podStartE2EDuration="2.566173593s" podCreationTimestamp="2026-02-16 13:21:10 +0000 UTC" firstStartedPulling="2026-02-16 13:21:11.459285273 +0000 UTC m=+1698.835633994" lastFinishedPulling="2026-02-16 13:21:12.198219702 +0000 UTC m=+1699.574568423" observedRunningTime="2026-02-16 13:21:12.558873184 +0000 UTC m=+1699.935221925" watchObservedRunningTime="2026-02-16 13:21:12.566173593 +0000 UTC m=+1699.942522314"
Feb 16 13:21:17 crc kubenswrapper[4740]: I0216 13:21:17.616694 4740 generic.go:334] "Generic (PLEG): container finished" podID="5add9653-c644-42d7-bd4d-10ecb8f84a90" containerID="bef5b8545f936634ce120d772c950924a651bfe93cb7ce6009b4184b4473fef5" exitCode=0
Feb 16 13:21:17 crc kubenswrapper[4740]: I0216 13:21:17.617413 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" event={"ID":"5add9653-c644-42d7-bd4d-10ecb8f84a90","Type":"ContainerDied","Data":"bef5b8545f936634ce120d772c950924a651bfe93cb7ce6009b4184b4473fef5"}
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.016274 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.106053 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam\") pod \"5add9653-c644-42d7-bd4d-10ecb8f84a90\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") "
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.106448 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsngz\" (UniqueName: \"kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz\") pod \"5add9653-c644-42d7-bd4d-10ecb8f84a90\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") "
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.106542 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory\") pod \"5add9653-c644-42d7-bd4d-10ecb8f84a90\" (UID: \"5add9653-c644-42d7-bd4d-10ecb8f84a90\") "
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.113444 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz" (OuterVolumeSpecName: "kube-api-access-hsngz") pod "5add9653-c644-42d7-bd4d-10ecb8f84a90" (UID: "5add9653-c644-42d7-bd4d-10ecb8f84a90"). InnerVolumeSpecName "kube-api-access-hsngz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.139041 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory" (OuterVolumeSpecName: "inventory") pod "5add9653-c644-42d7-bd4d-10ecb8f84a90" (UID: "5add9653-c644-42d7-bd4d-10ecb8f84a90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.153792 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5add9653-c644-42d7-bd4d-10ecb8f84a90" (UID: "5add9653-c644-42d7-bd4d-10ecb8f84a90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.208964 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsngz\" (UniqueName: \"kubernetes.io/projected/5add9653-c644-42d7-bd4d-10ecb8f84a90-kube-api-access-hsngz\") on node \"crc\" DevicePath \"\""
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.209000 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.209014 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5add9653-c644-42d7-bd4d-10ecb8f84a90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.635160 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv" event={"ID":"5add9653-c644-42d7-bd4d-10ecb8f84a90","Type":"ContainerDied","Data":"0d02f84743881743dccdc8a675b751fd5ccde625f142ea8fcdac0e776d29f5fb"}
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.635221 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d02f84743881743dccdc8a675b751fd5ccde625f142ea8fcdac0e776d29f5fb"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.635227 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w42sv"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.713942 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"]
Feb 16 13:21:19 crc kubenswrapper[4740]: E0216 13:21:19.714401 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5add9653-c644-42d7-bd4d-10ecb8f84a90" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.714422 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5add9653-c644-42d7-bd4d-10ecb8f84a90" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.714593 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5add9653-c644-42d7-bd4d-10ecb8f84a90" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.715282 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.717273 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.718239 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.718484 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.719080 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.720340 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vnzj\" (UniqueName: \"kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.720408 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.720675 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.729484 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"]
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.822838 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vnzj\" (UniqueName: \"kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.822899 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.823007 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.826608 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.834708 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:19 crc kubenswrapper[4740]: I0216 13:21:19.838792 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vnzj\" (UniqueName: \"kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-42525\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:20 crc kubenswrapper[4740]: I0216 13:21:20.031169 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:20 crc kubenswrapper[4740]: I0216 13:21:20.584607 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"]
Feb 16 13:21:20 crc kubenswrapper[4740]: I0216 13:21:20.642325 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" event={"ID":"bf3c8754-68ef-4956-a95b-c6751d81b5bf","Type":"ContainerStarted","Data":"1a26d829fd6d458a54d1ced82465a29d890bcbfbc93a8e2da8f5d87f651d6b99"}
Feb 16 13:21:21 crc kubenswrapper[4740]: I0216 13:21:21.651883 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" event={"ID":"bf3c8754-68ef-4956-a95b-c6751d81b5bf","Type":"ContainerStarted","Data":"07da2204c2e49d8d3b365a1c53b0b89995daf3879261ed3725095ccd68314b8e"}
Feb 16 13:21:21 crc kubenswrapper[4740]: I0216 13:21:21.666969 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" podStartSLOduration=2.220147077 podStartE2EDuration="2.666950204s" podCreationTimestamp="2026-02-16 13:21:19 +0000 UTC" firstStartedPulling="2026-02-16 13:21:20.599669696 +0000 UTC m=+1707.976018417" lastFinishedPulling="2026-02-16 13:21:21.046472823 +0000 UTC m=+1708.422821544" observedRunningTime="2026-02-16 13:21:21.664757414 +0000 UTC m=+1709.041106135" watchObservedRunningTime="2026-02-16 13:21:21.666950204 +0000 UTC m=+1709.043298925"
Feb 16 13:21:22 crc kubenswrapper[4740]: I0216 13:21:22.281980 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"
Feb 16 13:21:22 crc kubenswrapper[4740]: E0216 13:21:22.282769 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:21:29 crc kubenswrapper[4740]: I0216 13:21:29.029537 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfmpm"]
Feb 16 13:21:29 crc kubenswrapper[4740]: I0216 13:21:29.036658 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bfmpm"]
Feb 16 13:21:29 crc kubenswrapper[4740]: I0216 13:21:29.292442 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fce641e-1b76-4b99-a99d-9a0ccbf9680e" path="/var/lib/kubelet/pods/2fce641e-1b76-4b99-a99d-9a0ccbf9680e/volumes"
Feb 16 13:21:33 crc kubenswrapper[4740]: I0216 13:21:33.284724 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"
Feb 16 13:21:33 crc kubenswrapper[4740]: E0216 13:21:33.285391 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:21:46 crc kubenswrapper[4740]: I0216 13:21:46.281886 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"
Feb 16 13:21:46 crc kubenswrapper[4740]: E0216 13:21:46.283178 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:21:57 crc kubenswrapper[4740]: I0216 13:21:57.281135 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e"
Feb 16 13:21:57 crc kubenswrapper[4740]: E0216 13:21:57.282116 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:21:57 crc kubenswrapper[4740]: I0216 13:21:57.993980 4740 generic.go:334] "Generic (PLEG): container finished" podID="bf3c8754-68ef-4956-a95b-c6751d81b5bf" containerID="07da2204c2e49d8d3b365a1c53b0b89995daf3879261ed3725095ccd68314b8e" exitCode=0
Feb 16 13:21:57 crc kubenswrapper[4740]: I0216 13:21:57.994127 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" event={"ID":"bf3c8754-68ef-4956-a95b-c6751d81b5bf","Type":"ContainerDied","Data":"07da2204c2e49d8d3b365a1c53b0b89995daf3879261ed3725095ccd68314b8e"}
Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.416122 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.491932 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vnzj\" (UniqueName: \"kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj\") pod \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") "
Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.492206 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory\") pod \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") "
Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.492401 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam\") pod \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\" (UID: \"bf3c8754-68ef-4956-a95b-c6751d81b5bf\") "
Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.499002 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj" (OuterVolumeSpecName: "kube-api-access-6vnzj") pod "bf3c8754-68ef-4956-a95b-c6751d81b5bf" (UID: "bf3c8754-68ef-4956-a95b-c6751d81b5bf"). InnerVolumeSpecName "kube-api-access-6vnzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.521154 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf3c8754-68ef-4956-a95b-c6751d81b5bf" (UID: "bf3c8754-68ef-4956-a95b-c6751d81b5bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.524912 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory" (OuterVolumeSpecName: "inventory") pod "bf3c8754-68ef-4956-a95b-c6751d81b5bf" (UID: "bf3c8754-68ef-4956-a95b-c6751d81b5bf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.595125 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.595160 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vnzj\" (UniqueName: \"kubernetes.io/projected/bf3c8754-68ef-4956-a95b-c6751d81b5bf-kube-api-access-6vnzj\") on node \"crc\" DevicePath \"\""
Feb 16 13:21:59 crc kubenswrapper[4740]: I0216 13:21:59.595173 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf3c8754-68ef-4956-a95b-c6751d81b5bf-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.010872 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525" event={"ID":"bf3c8754-68ef-4956-a95b-c6751d81b5bf","Type":"ContainerDied","Data":"1a26d829fd6d458a54d1ced82465a29d890bcbfbc93a8e2da8f5d87f651d6b99"}
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.010925 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a26d829fd6d458a54d1ced82465a29d890bcbfbc93a8e2da8f5d87f651d6b99"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.010924 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-42525"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.105328 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"]
Feb 16 13:22:00 crc kubenswrapper[4740]: E0216 13:22:00.106017 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3c8754-68ef-4956-a95b-c6751d81b5bf" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.106040 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3c8754-68ef-4956-a95b-c6751d81b5bf" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.106220 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf3c8754-68ef-4956-a95b-c6751d81b5bf" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.106890 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.109267 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.109530 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.109685 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.110291 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.116084 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"]
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.209156 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.209275 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.209329 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tjc8\" (UniqueName: \"kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.310601 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.310689 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tjc8\" (UniqueName: \"kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.310772 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.316506 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.319581 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.332198 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tjc8\" (UniqueName: \"kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"
Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.427244 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:00 crc kubenswrapper[4740]: I0216 13:22:00.933040 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw"] Feb 16 13:22:01 crc kubenswrapper[4740]: I0216 13:22:01.020533 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" event={"ID":"928b9f1f-3a42-47e3-b895-756f66452ebf","Type":"ContainerStarted","Data":"1795fdd11494c7bc2c48a30d033cb8a34856c810f3231d1c2aca4738cf874f56"} Feb 16 13:22:02 crc kubenswrapper[4740]: I0216 13:22:02.034013 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" event={"ID":"928b9f1f-3a42-47e3-b895-756f66452ebf","Type":"ContainerStarted","Data":"499171f3d6accb7f214514aefc2442a0232182e193c44f41ec436d911ae03374"} Feb 16 13:22:02 crc kubenswrapper[4740]: I0216 13:22:02.071394 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" podStartSLOduration=1.656462229 podStartE2EDuration="2.071359125s" podCreationTimestamp="2026-02-16 13:22:00 +0000 UTC" firstStartedPulling="2026-02-16 13:22:00.938716286 +0000 UTC m=+1748.315065017" lastFinishedPulling="2026-02-16 13:22:01.353613182 +0000 UTC m=+1748.729961913" observedRunningTime="2026-02-16 13:22:02.064234712 +0000 UTC m=+1749.440583453" watchObservedRunningTime="2026-02-16 13:22:02.071359125 +0000 UTC m=+1749.447707886" Feb 16 13:22:05 crc kubenswrapper[4740]: I0216 13:22:05.542535 4740 scope.go:117] "RemoveContainer" containerID="7dd2f91b7b77a95d9efed18149d982eaea3f14083dd0271b85d27533a0b37d3b" Feb 16 13:22:10 crc kubenswrapper[4740]: I0216 13:22:10.282313 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:22:10 crc 
kubenswrapper[4740]: E0216 13:22:10.283100 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:22:24 crc kubenswrapper[4740]: I0216 13:22:24.280787 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:22:24 crc kubenswrapper[4740]: E0216 13:22:24.281651 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:22:25 crc kubenswrapper[4740]: I0216 13:22:25.043387 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8f7gd"] Feb 16 13:22:25 crc kubenswrapper[4740]: I0216 13:22:25.059558 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8f7gd"] Feb 16 13:22:25 crc kubenswrapper[4740]: I0216 13:22:25.292404 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f4deadb-18ac-4d06-ba22-e391b19d38cd" path="/var/lib/kubelet/pods/9f4deadb-18ac-4d06-ba22-e391b19d38cd/volumes" Feb 16 13:22:26 crc kubenswrapper[4740]: I0216 13:22:26.028047 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8crw8"] Feb 16 13:22:26 crc kubenswrapper[4740]: I0216 13:22:26.039269 4740 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell1-conductor-db-sync-8crw8"] Feb 16 13:22:27 crc kubenswrapper[4740]: I0216 13:22:27.300406 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="975c922d-b91a-4cf6-9739-0d478d19765a" path="/var/lib/kubelet/pods/975c922d-b91a-4cf6-9739-0d478d19765a/volumes" Feb 16 13:22:37 crc kubenswrapper[4740]: I0216 13:22:37.281921 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:22:37 crc kubenswrapper[4740]: E0216 13:22:37.282749 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:22:46 crc kubenswrapper[4740]: I0216 13:22:46.433674 4740 generic.go:334] "Generic (PLEG): container finished" podID="928b9f1f-3a42-47e3-b895-756f66452ebf" containerID="499171f3d6accb7f214514aefc2442a0232182e193c44f41ec436d911ae03374" exitCode=0 Feb 16 13:22:46 crc kubenswrapper[4740]: I0216 13:22:46.433776 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" event={"ID":"928b9f1f-3a42-47e3-b895-756f66452ebf","Type":"ContainerDied","Data":"499171f3d6accb7f214514aefc2442a0232182e193c44f41ec436d911ae03374"} Feb 16 13:22:47 crc kubenswrapper[4740]: I0216 13:22:47.865920 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:47 crc kubenswrapper[4740]: I0216 13:22:47.992361 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory\") pod \"928b9f1f-3a42-47e3-b895-756f66452ebf\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " Feb 16 13:22:47 crc kubenswrapper[4740]: I0216 13:22:47.992410 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tjc8\" (UniqueName: \"kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8\") pod \"928b9f1f-3a42-47e3-b895-756f66452ebf\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " Feb 16 13:22:47 crc kubenswrapper[4740]: I0216 13:22:47.992478 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam\") pod \"928b9f1f-3a42-47e3-b895-756f66452ebf\" (UID: \"928b9f1f-3a42-47e3-b895-756f66452ebf\") " Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.000297 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8" (OuterVolumeSpecName: "kube-api-access-5tjc8") pod "928b9f1f-3a42-47e3-b895-756f66452ebf" (UID: "928b9f1f-3a42-47e3-b895-756f66452ebf"). InnerVolumeSpecName "kube-api-access-5tjc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.028872 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "928b9f1f-3a42-47e3-b895-756f66452ebf" (UID: "928b9f1f-3a42-47e3-b895-756f66452ebf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.030845 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory" (OuterVolumeSpecName: "inventory") pod "928b9f1f-3a42-47e3-b895-756f66452ebf" (UID: "928b9f1f-3a42-47e3-b895-756f66452ebf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.094509 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.094550 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/928b9f1f-3a42-47e3-b895-756f66452ebf-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.094560 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tjc8\" (UniqueName: \"kubernetes.io/projected/928b9f1f-3a42-47e3-b895-756f66452ebf-kube-api-access-5tjc8\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.461445 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" 
event={"ID":"928b9f1f-3a42-47e3-b895-756f66452ebf","Type":"ContainerDied","Data":"1795fdd11494c7bc2c48a30d033cb8a34856c810f3231d1c2aca4738cf874f56"} Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.461507 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1795fdd11494c7bc2c48a30d033cb8a34856c810f3231d1c2aca4738cf874f56" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.461524 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.632882 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-87s8t"] Feb 16 13:22:48 crc kubenswrapper[4740]: E0216 13:22:48.633431 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928b9f1f-3a42-47e3-b895-756f66452ebf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.633450 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="928b9f1f-3a42-47e3-b895-756f66452ebf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.633656 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="928b9f1f-3a42-47e3-b895-756f66452ebf" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.634414 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.639224 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.639455 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.639583 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.639724 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.659991 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-87s8t"] Feb 16 13:22:48 crc kubenswrapper[4740]: E0216 13:22:48.797259 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod928b9f1f_3a42_47e3_b895_756f66452ebf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod928b9f1f_3a42_47e3_b895_756f66452ebf.slice/crio-1795fdd11494c7bc2c48a30d033cb8a34856c810f3231d1c2aca4738cf874f56\": RecentStats: unable to find data in memory cache]" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.812075 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 
16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.812474 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5xt2\" (UniqueName: \"kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.812650 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.916031 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5xt2\" (UniqueName: \"kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.916516 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.916792 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" 
(UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.925437 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.931762 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.938840 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5xt2\" (UniqueName: \"kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2\") pod \"ssh-known-hosts-edpm-deployment-87s8t\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:48 crc kubenswrapper[4740]: I0216 13:22:48.997824 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:49 crc kubenswrapper[4740]: I0216 13:22:49.281590 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:22:49 crc kubenswrapper[4740]: E0216 13:22:49.282018 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:22:50 crc kubenswrapper[4740]: I0216 13:22:50.403118 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-87s8t"] Feb 16 13:22:50 crc kubenswrapper[4740]: I0216 13:22:50.476545 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" event={"ID":"8c5c2438-cfba-41a9-b429-80c9ce563348","Type":"ContainerStarted","Data":"53633dd17b1c89afe1c2983f2a35b9d61e7cd61ad850690674ffa04e3d2b3956"} Feb 16 13:22:51 crc kubenswrapper[4740]: I0216 13:22:51.487607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" event={"ID":"8c5c2438-cfba-41a9-b429-80c9ce563348","Type":"ContainerStarted","Data":"4c42dc94e717de546ca05b1432551cfbc5b059948a6a080690af8ae263dbce4f"} Feb 16 13:22:51 crc kubenswrapper[4740]: I0216 13:22:51.511632 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" podStartSLOduration=3.104791047 podStartE2EDuration="3.51161403s" podCreationTimestamp="2026-02-16 13:22:48 +0000 UTC" firstStartedPulling="2026-02-16 13:22:50.391582036 +0000 UTC m=+1797.767930757" lastFinishedPulling="2026-02-16 
13:22:50.798405019 +0000 UTC m=+1798.174753740" observedRunningTime="2026-02-16 13:22:51.505342593 +0000 UTC m=+1798.881691324" watchObservedRunningTime="2026-02-16 13:22:51.51161403 +0000 UTC m=+1798.887962761" Feb 16 13:22:57 crc kubenswrapper[4740]: I0216 13:22:57.541333 4740 generic.go:334] "Generic (PLEG): container finished" podID="8c5c2438-cfba-41a9-b429-80c9ce563348" containerID="4c42dc94e717de546ca05b1432551cfbc5b059948a6a080690af8ae263dbce4f" exitCode=0 Feb 16 13:22:57 crc kubenswrapper[4740]: I0216 13:22:57.541361 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" event={"ID":"8c5c2438-cfba-41a9-b429-80c9ce563348","Type":"ContainerDied","Data":"4c42dc94e717de546ca05b1432551cfbc5b059948a6a080690af8ae263dbce4f"} Feb 16 13:22:58 crc kubenswrapper[4740]: I0216 13:22:58.996324 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.105213 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5xt2\" (UniqueName: \"kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2\") pod \"8c5c2438-cfba-41a9-b429-80c9ce563348\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.105261 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam\") pod \"8c5c2438-cfba-41a9-b429-80c9ce563348\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.105336 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0\") pod 
\"8c5c2438-cfba-41a9-b429-80c9ce563348\" (UID: \"8c5c2438-cfba-41a9-b429-80c9ce563348\") " Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.110559 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2" (OuterVolumeSpecName: "kube-api-access-j5xt2") pod "8c5c2438-cfba-41a9-b429-80c9ce563348" (UID: "8c5c2438-cfba-41a9-b429-80c9ce563348"). InnerVolumeSpecName "kube-api-access-j5xt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.131414 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c5c2438-cfba-41a9-b429-80c9ce563348" (UID: "8c5c2438-cfba-41a9-b429-80c9ce563348"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.138009 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8c5c2438-cfba-41a9-b429-80c9ce563348" (UID: "8c5c2438-cfba-41a9-b429-80c9ce563348"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.208509 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5xt2\" (UniqueName: \"kubernetes.io/projected/8c5c2438-cfba-41a9-b429-80c9ce563348-kube-api-access-j5xt2\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.208569 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.208594 4740 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8c5c2438-cfba-41a9-b429-80c9ce563348-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.558473 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" event={"ID":"8c5c2438-cfba-41a9-b429-80c9ce563348","Type":"ContainerDied","Data":"53633dd17b1c89afe1c2983f2a35b9d61e7cd61ad850690674ffa04e3d2b3956"} Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.558747 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53633dd17b1c89afe1c2983f2a35b9d61e7cd61ad850690674ffa04e3d2b3956" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.558528 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-87s8t" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.648094 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds"] Feb 16 13:22:59 crc kubenswrapper[4740]: E0216 13:22:59.648711 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5c2438-cfba-41a9-b429-80c9ce563348" containerName="ssh-known-hosts-edpm-deployment" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.648795 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5c2438-cfba-41a9-b429-80c9ce563348" containerName="ssh-known-hosts-edpm-deployment" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.649097 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5c2438-cfba-41a9-b429-80c9ce563348" containerName="ssh-known-hosts-edpm-deployment" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.649740 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.654057 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.654158 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.654409 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.654672 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.673334 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds"] Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.725353 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.725428 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzxtm\" (UniqueName: \"kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.725466 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.826731 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzxtm\" (UniqueName: \"kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.826837 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.826987 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.831223 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: 
\"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.835650 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.850370 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzxtm\" (UniqueName: \"kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-r8mds\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:22:59 crc kubenswrapper[4740]: I0216 13:22:59.968024 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:23:00 crc kubenswrapper[4740]: I0216 13:23:00.539208 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds"] Feb 16 13:23:00 crc kubenswrapper[4740]: I0216 13:23:00.569758 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" event={"ID":"981b1e60-57d5-4a6b-8531-3fd31dd46fa5","Type":"ContainerStarted","Data":"2dab8ecafd1d95f9d8e6fdf04fc6bdd21d98e80d91e7dfc1d0f8ac0940f8b8e5"} Feb 16 13:23:01 crc kubenswrapper[4740]: I0216 13:23:01.583798 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" event={"ID":"981b1e60-57d5-4a6b-8531-3fd31dd46fa5","Type":"ContainerStarted","Data":"ea8d927a6e9655274266da02ef4e31b3f1b7918a417e6263a62fb6f4ffefe22b"} Feb 16 13:23:01 crc kubenswrapper[4740]: I0216 13:23:01.601749 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" podStartSLOduration=2.126018965 podStartE2EDuration="2.60172619s" podCreationTimestamp="2026-02-16 13:22:59 +0000 UTC" firstStartedPulling="2026-02-16 13:23:00.553293315 +0000 UTC m=+1807.929642036" lastFinishedPulling="2026-02-16 13:23:01.02900053 +0000 UTC m=+1808.405349261" observedRunningTime="2026-02-16 13:23:01.599735658 +0000 UTC m=+1808.976084379" watchObservedRunningTime="2026-02-16 13:23:01.60172619 +0000 UTC m=+1808.978074911" Feb 16 13:23:03 crc kubenswrapper[4740]: I0216 13:23:03.289806 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:23:03 crc kubenswrapper[4740]: E0216 13:23:03.290433 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:23:05 crc kubenswrapper[4740]: I0216 13:23:05.622374 4740 scope.go:117] "RemoveContainer" containerID="2b6d6cab54020a43fe901cf5d2e1f8b359bd5539376cbb46667ef99bd2c317e5" Feb 16 13:23:05 crc kubenswrapper[4740]: I0216 13:23:05.662215 4740 scope.go:117] "RemoveContainer" containerID="0919266d903860a70c30148766f4b23532edcd71ec7a5ae724fd4a59f06dffc4" Feb 16 13:23:09 crc kubenswrapper[4740]: I0216 13:23:09.645790 4740 generic.go:334] "Generic (PLEG): container finished" podID="981b1e60-57d5-4a6b-8531-3fd31dd46fa5" containerID="ea8d927a6e9655274266da02ef4e31b3f1b7918a417e6263a62fb6f4ffefe22b" exitCode=0 Feb 16 13:23:09 crc kubenswrapper[4740]: I0216 13:23:09.646369 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" event={"ID":"981b1e60-57d5-4a6b-8531-3fd31dd46fa5","Type":"ContainerDied","Data":"ea8d927a6e9655274266da02ef4e31b3f1b7918a417e6263a62fb6f4ffefe22b"} Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.041497 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9964"] Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.050060 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9964"] Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.105080 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.245466 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam\") pod \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.245761 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory\") pod \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.245850 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzxtm\" (UniqueName: \"kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm\") pod \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\" (UID: \"981b1e60-57d5-4a6b-8531-3fd31dd46fa5\") " Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.250581 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm" (OuterVolumeSpecName: "kube-api-access-qzxtm") pod "981b1e60-57d5-4a6b-8531-3fd31dd46fa5" (UID: "981b1e60-57d5-4a6b-8531-3fd31dd46fa5"). InnerVolumeSpecName "kube-api-access-qzxtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.286424 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory" (OuterVolumeSpecName: "inventory") pod "981b1e60-57d5-4a6b-8531-3fd31dd46fa5" (UID: "981b1e60-57d5-4a6b-8531-3fd31dd46fa5"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.305282 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "981b1e60-57d5-4a6b-8531-3fd31dd46fa5" (UID: "981b1e60-57d5-4a6b-8531-3fd31dd46fa5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.305454 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798bf8e1-4a33-48eb-bbb3-9be8d38027de" path="/var/lib/kubelet/pods/798bf8e1-4a33-48eb-bbb3-9be8d38027de/volumes" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.348495 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.348544 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzxtm\" (UniqueName: \"kubernetes.io/projected/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-kube-api-access-qzxtm\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.348556 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/981b1e60-57d5-4a6b-8531-3fd31dd46fa5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.666157 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" event={"ID":"981b1e60-57d5-4a6b-8531-3fd31dd46fa5","Type":"ContainerDied","Data":"2dab8ecafd1d95f9d8e6fdf04fc6bdd21d98e80d91e7dfc1d0f8ac0940f8b8e5"} Feb 16 13:23:11 crc 
kubenswrapper[4740]: I0216 13:23:11.666219 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dab8ecafd1d95f9d8e6fdf04fc6bdd21d98e80d91e7dfc1d0f8ac0940f8b8e5" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.667990 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-r8mds" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.775262 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g"] Feb 16 13:23:11 crc kubenswrapper[4740]: E0216 13:23:11.776524 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981b1e60-57d5-4a6b-8531-3fd31dd46fa5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.776568 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="981b1e60-57d5-4a6b-8531-3fd31dd46fa5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.777255 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="981b1e60-57d5-4a6b-8531-3fd31dd46fa5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.778116 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.781567 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.781872 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.781599 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.781740 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.789242 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g"] Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.861409 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.861471 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29d67\" (UniqueName: \"kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.862013 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.963251 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29d67\" (UniqueName: \"kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.963439 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.963501 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.967254 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.967268 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:11 crc kubenswrapper[4740]: I0216 13:23:11.980713 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29d67\" (UniqueName: \"kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:12 crc kubenswrapper[4740]: I0216 13:23:12.098341 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:12 crc kubenswrapper[4740]: I0216 13:23:12.641480 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g"] Feb 16 13:23:12 crc kubenswrapper[4740]: I0216 13:23:12.681712 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" event={"ID":"9fa622a2-4774-4038-b9ec-ec4bc7f57a46","Type":"ContainerStarted","Data":"95261fc9abbee3973a49eb1f3537faebfcdc2586215c5a3d4d0981cc45f96633"} Feb 16 13:23:13 crc kubenswrapper[4740]: I0216 13:23:13.692151 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" event={"ID":"9fa622a2-4774-4038-b9ec-ec4bc7f57a46","Type":"ContainerStarted","Data":"05e652519318768d325a754b0bb1e53b51234bae739c6c37ba79de488ea96f8f"} Feb 16 13:23:13 crc kubenswrapper[4740]: I0216 13:23:13.716750 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" podStartSLOduration=2.158284335 podStartE2EDuration="2.716724373s" podCreationTimestamp="2026-02-16 13:23:11 +0000 UTC" firstStartedPulling="2026-02-16 13:23:12.64154865 +0000 UTC m=+1820.017897371" lastFinishedPulling="2026-02-16 13:23:13.199988688 +0000 UTC m=+1820.576337409" observedRunningTime="2026-02-16 13:23:13.713401099 +0000 UTC m=+1821.089749840" watchObservedRunningTime="2026-02-16 13:23:13.716724373 +0000 UTC m=+1821.093073094" Feb 16 13:23:18 crc kubenswrapper[4740]: I0216 13:23:18.281631 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:23:18 crc kubenswrapper[4740]: I0216 13:23:18.747585 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" 
event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239"} Feb 16 13:23:24 crc kubenswrapper[4740]: I0216 13:23:24.965118 4740 generic.go:334] "Generic (PLEG): container finished" podID="9fa622a2-4774-4038-b9ec-ec4bc7f57a46" containerID="05e652519318768d325a754b0bb1e53b51234bae739c6c37ba79de488ea96f8f" exitCode=0 Feb 16 13:23:24 crc kubenswrapper[4740]: I0216 13:23:24.965352 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" event={"ID":"9fa622a2-4774-4038-b9ec-ec4bc7f57a46","Type":"ContainerDied","Data":"05e652519318768d325a754b0bb1e53b51234bae739c6c37ba79de488ea96f8f"} Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.410904 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.478617 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29d67\" (UniqueName: \"kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67\") pod \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.478782 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam\") pod \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\" (UID: \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.478906 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory\") pod \"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\" (UID: 
\"9fa622a2-4774-4038-b9ec-ec4bc7f57a46\") " Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.490070 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67" (OuterVolumeSpecName: "kube-api-access-29d67") pod "9fa622a2-4774-4038-b9ec-ec4bc7f57a46" (UID: "9fa622a2-4774-4038-b9ec-ec4bc7f57a46"). InnerVolumeSpecName "kube-api-access-29d67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.507425 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9fa622a2-4774-4038-b9ec-ec4bc7f57a46" (UID: "9fa622a2-4774-4038-b9ec-ec4bc7f57a46"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.522160 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory" (OuterVolumeSpecName: "inventory") pod "9fa622a2-4774-4038-b9ec-ec4bc7f57a46" (UID: "9fa622a2-4774-4038-b9ec-ec4bc7f57a46"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.581033 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29d67\" (UniqueName: \"kubernetes.io/projected/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-kube-api-access-29d67\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.581083 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.581095 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fa622a2-4774-4038-b9ec-ec4bc7f57a46-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.981270 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" event={"ID":"9fa622a2-4774-4038-b9ec-ec4bc7f57a46","Type":"ContainerDied","Data":"95261fc9abbee3973a49eb1f3537faebfcdc2586215c5a3d4d0981cc45f96633"} Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.981311 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95261fc9abbee3973a49eb1f3537faebfcdc2586215c5a3d4d0981cc45f96633" Feb 16 13:23:26 crc kubenswrapper[4740]: I0216 13:23:26.981324 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.064222 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh"] Feb 16 13:23:27 crc kubenswrapper[4740]: E0216 13:23:27.064710 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa622a2-4774-4038-b9ec-ec4bc7f57a46" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.064739 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa622a2-4774-4038-b9ec-ec4bc7f57a46" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.065017 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa622a2-4774-4038-b9ec-ec4bc7f57a46" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.065781 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.071698 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.071943 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.072125 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.072305 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.072500 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.072548 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.072601 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.079608 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh"] Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.081621 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191584 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191640 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191715 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191783 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191827 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mjt\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.191908 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192048 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192155 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192206 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192259 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192367 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192555 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.192590 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.293871 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294195 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9mjt\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294247 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294272 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294300 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294322 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294343 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: 
\"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294374 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294435 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294463 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 
crc kubenswrapper[4740]: I0216 13:23:27.294479 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294522 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.294541 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.303679 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.304092 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.304497 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.305577 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.306162 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.306305 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.306588 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.306872 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.308139 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.308739 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: 
\"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.309783 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.315168 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.315207 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.322120 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9mjt\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.418211 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.964469 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh"] Feb 16 13:23:27 crc kubenswrapper[4740]: W0216 13:23:27.974062 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e117ddc_9ff8_414d_859b_0a16b4846029.slice/crio-dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8 WatchSource:0}: Error finding container dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8: Status 404 returned error can't find the container with id dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8 Feb 16 13:23:27 crc kubenswrapper[4740]: I0216 13:23:27.990667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" event={"ID":"3e117ddc-9ff8-414d-859b-0a16b4846029","Type":"ContainerStarted","Data":"dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8"} Feb 16 13:23:29 crc kubenswrapper[4740]: I0216 13:23:29.000225 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" event={"ID":"3e117ddc-9ff8-414d-859b-0a16b4846029","Type":"ContainerStarted","Data":"4851cb174dad6fed04b8c1e5aca0598cfd1f61a411fcaef18bddef0d653c4a50"} Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.877442 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" podStartSLOduration=17.472736127 podStartE2EDuration="17.877423389s" 
podCreationTimestamp="2026-02-16 13:23:27 +0000 UTC" firstStartedPulling="2026-02-16 13:23:27.975885843 +0000 UTC m=+1835.352234564" lastFinishedPulling="2026-02-16 13:23:28.380573105 +0000 UTC m=+1835.756921826" observedRunningTime="2026-02-16 13:23:29.030142671 +0000 UTC m=+1836.406491392" watchObservedRunningTime="2026-02-16 13:23:44.877423389 +0000 UTC m=+1852.253772110" Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.881348 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.885146 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.893629 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.970031 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.970265 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdfl8\" (UniqueName: \"kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:44 crc kubenswrapper[4740]: I0216 13:23:44.970542 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.071989 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.072109 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.072165 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdfl8\" (UniqueName: \"kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.072467 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.072588 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.092338 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdfl8\" (UniqueName: \"kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8\") pod \"redhat-operators-bc555\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.265665 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:45 crc kubenswrapper[4740]: I0216 13:23:45.731392 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:23:46 crc kubenswrapper[4740]: I0216 13:23:46.130244 4740 generic.go:334] "Generic (PLEG): container finished" podID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerID="b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7" exitCode=0 Feb 16 13:23:46 crc kubenswrapper[4740]: I0216 13:23:46.130293 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerDied","Data":"b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7"} Feb 16 13:23:46 crc kubenswrapper[4740]: I0216 13:23:46.130325 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerStarted","Data":"82656086a4a118e1102fc82f336330b684a7f0526916306ff71cb6795b454ce4"} Feb 16 13:23:48 crc kubenswrapper[4740]: I0216 13:23:48.146121 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerStarted","Data":"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d"} Feb 16 13:23:50 crc kubenswrapper[4740]: I0216 13:23:50.164433 4740 generic.go:334] "Generic (PLEG): container finished" podID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerID="09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d" exitCode=0 Feb 16 13:23:50 crc kubenswrapper[4740]: I0216 13:23:50.164520 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerDied","Data":"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d"} Feb 16 13:23:51 crc kubenswrapper[4740]: I0216 13:23:51.176491 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerStarted","Data":"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2"} Feb 16 13:23:51 crc kubenswrapper[4740]: I0216 13:23:51.198525 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bc555" podStartSLOduration=2.792385574 podStartE2EDuration="7.198505745s" podCreationTimestamp="2026-02-16 13:23:44 +0000 UTC" firstStartedPulling="2026-02-16 13:23:46.131736411 +0000 UTC m=+1853.508085132" lastFinishedPulling="2026-02-16 13:23:50.537856582 +0000 UTC m=+1857.914205303" observedRunningTime="2026-02-16 13:23:51.190183705 +0000 UTC m=+1858.566532426" watchObservedRunningTime="2026-02-16 13:23:51.198505745 +0000 UTC m=+1858.574854466" Feb 16 13:23:55 crc kubenswrapper[4740]: I0216 13:23:55.266688 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:55 crc kubenswrapper[4740]: I0216 13:23:55.267269 4740 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:23:56 crc kubenswrapper[4740]: I0216 13:23:56.313482 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bc555" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="registry-server" probeResult="failure" output=< Feb 16 13:23:56 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 13:23:56 crc kubenswrapper[4740]: > Feb 16 13:24:04 crc kubenswrapper[4740]: I0216 13:24:04.302078 4740 generic.go:334] "Generic (PLEG): container finished" podID="3e117ddc-9ff8-414d-859b-0a16b4846029" containerID="4851cb174dad6fed04b8c1e5aca0598cfd1f61a411fcaef18bddef0d653c4a50" exitCode=0 Feb 16 13:24:04 crc kubenswrapper[4740]: I0216 13:24:04.302175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" event={"ID":"3e117ddc-9ff8-414d-859b-0a16b4846029","Type":"ContainerDied","Data":"4851cb174dad6fed04b8c1e5aca0598cfd1f61a411fcaef18bddef0d653c4a50"} Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.340141 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.407167 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.584236 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.709764 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.758219 4740 scope.go:117] "RemoveContainer" containerID="88c2737be162f9ce8e9f65cc3abfbf6976ffda5494fd0d0154f6ce2147c27b29" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.775112 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.775236 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.775264 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.775293 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776319 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776397 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776444 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776473 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776503 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776523 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x9mjt\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776546 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776584 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776615 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.776648 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle\") pod \"3e117ddc-9ff8-414d-859b-0a16b4846029\" (UID: \"3e117ddc-9ff8-414d-859b-0a16b4846029\") " Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.783825 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod 
"3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.784380 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.784693 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.784746 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.785773 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.788527 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.788591 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.788852 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). 
InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.789359 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.789546 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.790985 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.795290 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt" (OuterVolumeSpecName: "kube-api-access-x9mjt") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "kube-api-access-x9mjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.808649 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.816579 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory" (OuterVolumeSpecName: "inventory") pod "3e117ddc-9ff8-414d-859b-0a16b4846029" (UID: "3e117ddc-9ff8-414d-859b-0a16b4846029"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879194 4740 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879226 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879238 4740 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879248 4740 reconciler_common.go:293] "Volume detached for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879256 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879265 4740 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879276 4740 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879284 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879294 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879303 4740 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879311 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9mjt\" (UniqueName: \"kubernetes.io/projected/3e117ddc-9ff8-414d-859b-0a16b4846029-kube-api-access-x9mjt\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879321 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879330 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:05 crc kubenswrapper[4740]: I0216 13:24:05.879337 4740 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e117ddc-9ff8-414d-859b-0a16b4846029-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.328660 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" event={"ID":"3e117ddc-9ff8-414d-859b-0a16b4846029","Type":"ContainerDied","Data":"dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8"} Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.328750 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae8e8fc85986ae57707718c7d3b7b2fcea041ebd383b8f7c4cb0210b9a8f5a8" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.328706 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.460110 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk"] Feb 16 13:24:06 crc kubenswrapper[4740]: E0216 13:24:06.460684 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e117ddc-9ff8-414d-859b-0a16b4846029" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.460703 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e117ddc-9ff8-414d-859b-0a16b4846029" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.461002 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e117ddc-9ff8-414d-859b-0a16b4846029" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.461902 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.469678 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.469699 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.469962 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.470109 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.470133 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.477071 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk"] Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.600609 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swbb4\" (UniqueName: \"kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.600691 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.600751 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.600877 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.600913 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.702635 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.702680 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.702752 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swbb4\" (UniqueName: \"kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.702784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.702841 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.703703 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: 
\"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.708443 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.711460 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.713761 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.742431 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swbb4\" (UniqueName: \"kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zzdbk\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:06 crc kubenswrapper[4740]: I0216 13:24:06.790331 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.338176 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bc555" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="registry-server" containerID="cri-o://d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2" gracePeriod=2 Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.399359 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk"] Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.797426 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.925637 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities\") pod \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.925766 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content\") pod \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.925873 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdfl8\" (UniqueName: \"kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8\") pod \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\" (UID: \"f7370a76-dcf5-4db7-b2b8-7a142cbae00d\") " Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.926605 4740 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities" (OuterVolumeSpecName: "utilities") pod "f7370a76-dcf5-4db7-b2b8-7a142cbae00d" (UID: "f7370a76-dcf5-4db7-b2b8-7a142cbae00d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:24:07 crc kubenswrapper[4740]: I0216 13:24:07.929162 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8" (OuterVolumeSpecName: "kube-api-access-fdfl8") pod "f7370a76-dcf5-4db7-b2b8-7a142cbae00d" (UID: "f7370a76-dcf5-4db7-b2b8-7a142cbae00d"). InnerVolumeSpecName "kube-api-access-fdfl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.028071 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.028249 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdfl8\" (UniqueName: \"kubernetes.io/projected/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-kube-api-access-fdfl8\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.082661 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7370a76-dcf5-4db7-b2b8-7a142cbae00d" (UID: "f7370a76-dcf5-4db7-b2b8-7a142cbae00d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.129503 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7370a76-dcf5-4db7-b2b8-7a142cbae00d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.348974 4740 generic.go:334] "Generic (PLEG): container finished" podID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerID="d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2" exitCode=0 Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.349062 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerDied","Data":"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2"} Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.349098 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bc555" event={"ID":"f7370a76-dcf5-4db7-b2b8-7a142cbae00d","Type":"ContainerDied","Data":"82656086a4a118e1102fc82f336330b684a7f0526916306ff71cb6795b454ce4"} Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.349112 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bc555" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.349132 4740 scope.go:117] "RemoveContainer" containerID="d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.354210 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" event={"ID":"d66e0695-3544-4fd0-9d34-42bea96ea9de","Type":"ContainerStarted","Data":"def6f897d9c7720679dd57aff0afc546f1e90066050f43d185ff1d14432ddf04"} Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.354252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" event={"ID":"d66e0695-3544-4fd0-9d34-42bea96ea9de","Type":"ContainerStarted","Data":"0b1a54984c0f19763fdb25f3b6927c9500467610413a5774582507a2eac16124"} Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.379162 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" podStartSLOduration=1.880330781 podStartE2EDuration="2.379125514s" podCreationTimestamp="2026-02-16 13:24:06 +0000 UTC" firstStartedPulling="2026-02-16 13:24:07.41837076 +0000 UTC m=+1874.794719491" lastFinishedPulling="2026-02-16 13:24:07.917165503 +0000 UTC m=+1875.293514224" observedRunningTime="2026-02-16 13:24:08.368186753 +0000 UTC m=+1875.744535474" watchObservedRunningTime="2026-02-16 13:24:08.379125514 +0000 UTC m=+1875.755474235" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.383100 4740 scope.go:117] "RemoveContainer" containerID="09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.403551 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.411445 4740 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bc555"] Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.417640 4740 scope.go:117] "RemoveContainer" containerID="b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.443667 4740 scope.go:117] "RemoveContainer" containerID="d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2" Feb 16 13:24:08 crc kubenswrapper[4740]: E0216 13:24:08.450545 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2\": container with ID starting with d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2 not found: ID does not exist" containerID="d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.450617 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2"} err="failed to get container status \"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2\": rpc error: code = NotFound desc = could not find container \"d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2\": container with ID starting with d2261b1448b6bc698be532b8b915ebb9211daeac888d567ac35848758aa3e8a2 not found: ID does not exist" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.450655 4740 scope.go:117] "RemoveContainer" containerID="09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d" Feb 16 13:24:08 crc kubenswrapper[4740]: E0216 13:24:08.451172 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d\": container with ID starting with 
09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d not found: ID does not exist" containerID="09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.451239 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d"} err="failed to get container status \"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d\": rpc error: code = NotFound desc = could not find container \"09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d\": container with ID starting with 09421f7739aa07c18bcbbe8829e978ce0cedd59174775435f5fe0feda63b460d not found: ID does not exist" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.451274 4740 scope.go:117] "RemoveContainer" containerID="b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7" Feb 16 13:24:08 crc kubenswrapper[4740]: E0216 13:24:08.451738 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7\": container with ID starting with b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7 not found: ID does not exist" containerID="b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7" Feb 16 13:24:08 crc kubenswrapper[4740]: I0216 13:24:08.451771 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7"} err="failed to get container status \"b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7\": rpc error: code = NotFound desc = could not find container \"b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7\": container with ID starting with b5cc3168bed59b337440fd344c7edeb920f3cea78451166e6a9e2f4f92b7f5c7 not found: ID does not 
exist" Feb 16 13:24:09 crc kubenswrapper[4740]: I0216 13:24:09.294930 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" path="/var/lib/kubelet/pods/f7370a76-dcf5-4db7-b2b8-7a142cbae00d/volumes" Feb 16 13:25:08 crc kubenswrapper[4740]: I0216 13:25:08.911476 4740 generic.go:334] "Generic (PLEG): container finished" podID="d66e0695-3544-4fd0-9d34-42bea96ea9de" containerID="def6f897d9c7720679dd57aff0afc546f1e90066050f43d185ff1d14432ddf04" exitCode=0 Feb 16 13:25:08 crc kubenswrapper[4740]: I0216 13:25:08.911574 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" event={"ID":"d66e0695-3544-4fd0-9d34-42bea96ea9de","Type":"ContainerDied","Data":"def6f897d9c7720679dd57aff0afc546f1e90066050f43d185ff1d14432ddf04"} Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.296337 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.416428 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory\") pod \"d66e0695-3544-4fd0-9d34-42bea96ea9de\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.416802 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle\") pod \"d66e0695-3544-4fd0-9d34-42bea96ea9de\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.416888 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swbb4\" (UniqueName: 
\"kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4\") pod \"d66e0695-3544-4fd0-9d34-42bea96ea9de\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.416977 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam\") pod \"d66e0695-3544-4fd0-9d34-42bea96ea9de\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.417100 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0\") pod \"d66e0695-3544-4fd0-9d34-42bea96ea9de\" (UID: \"d66e0695-3544-4fd0-9d34-42bea96ea9de\") " Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.423431 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d66e0695-3544-4fd0-9d34-42bea96ea9de" (UID: "d66e0695-3544-4fd0-9d34-42bea96ea9de"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.423554 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4" (OuterVolumeSpecName: "kube-api-access-swbb4") pod "d66e0695-3544-4fd0-9d34-42bea96ea9de" (UID: "d66e0695-3544-4fd0-9d34-42bea96ea9de"). InnerVolumeSpecName "kube-api-access-swbb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.446530 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d66e0695-3544-4fd0-9d34-42bea96ea9de" (UID: "d66e0695-3544-4fd0-9d34-42bea96ea9de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.447956 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory" (OuterVolumeSpecName: "inventory") pod "d66e0695-3544-4fd0-9d34-42bea96ea9de" (UID: "d66e0695-3544-4fd0-9d34-42bea96ea9de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.456040 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "d66e0695-3544-4fd0-9d34-42bea96ea9de" (UID: "d66e0695-3544-4fd0-9d34-42bea96ea9de"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.520210 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.520264 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.520278 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swbb4\" (UniqueName: \"kubernetes.io/projected/d66e0695-3544-4fd0-9d34-42bea96ea9de-kube-api-access-swbb4\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.520289 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66e0695-3544-4fd0-9d34-42bea96ea9de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.520318 4740 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d66e0695-3544-4fd0-9d34-42bea96ea9de-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.934626 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" event={"ID":"d66e0695-3544-4fd0-9d34-42bea96ea9de","Type":"ContainerDied","Data":"0b1a54984c0f19763fdb25f3b6927c9500467610413a5774582507a2eac16124"} Feb 16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.934670 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b1a54984c0f19763fdb25f3b6927c9500467610413a5774582507a2eac16124" Feb 
16 13:25:10 crc kubenswrapper[4740]: I0216 13:25:10.934701 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zzdbk" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032150 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w"] Feb 16 13:25:11 crc kubenswrapper[4740]: E0216 13:25:11.032492 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66e0695-3544-4fd0-9d34-42bea96ea9de" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032508 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66e0695-3544-4fd0-9d34-42bea96ea9de" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 16 13:25:11 crc kubenswrapper[4740]: E0216 13:25:11.032531 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="extract-content" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032537 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="extract-content" Feb 16 13:25:11 crc kubenswrapper[4740]: E0216 13:25:11.032545 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="extract-utilities" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032551 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="extract-utilities" Feb 16 13:25:11 crc kubenswrapper[4740]: E0216 13:25:11.032574 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="registry-server" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032579 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="registry-server" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032735 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66e0695-3544-4fd0-9d34-42bea96ea9de" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.032747 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7370a76-dcf5-4db7-b2b8-7a142cbae00d" containerName="registry-server" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.033331 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.040406 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.040690 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.040938 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.041066 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.041202 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.042278 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.051115 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w"] Feb 
16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.131073 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.131346 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.131461 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.131524 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc 
kubenswrapper[4740]: I0216 13:25:11.131568 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vfj\" (UniqueName: \"kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.131708 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.233505 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vfj\" (UniqueName: \"kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.233596 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.233671 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.233849 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.234496 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.234547 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.237984 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.238302 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.238722 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.245287 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.245562 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.250851 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vfj\" (UniqueName: \"kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.364935 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.854650 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w"] Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.863801 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:25:11 crc kubenswrapper[4740]: I0216 13:25:11.943575 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" event={"ID":"3a7cecfd-1168-4187-a70c-7b2151ff214f","Type":"ContainerStarted","Data":"d62b36a9918816bc2b3ac769046ef753b4bbbfcf769e0afedd743b11e8af00b0"} Feb 16 13:25:12 crc kubenswrapper[4740]: I0216 13:25:12.954570 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" event={"ID":"3a7cecfd-1168-4187-a70c-7b2151ff214f","Type":"ContainerStarted","Data":"d8bb03661c90a47850c899f4cabcc8ce7bab8422fa528702cfdf8cd3447643dc"} Feb 16 13:25:12 crc kubenswrapper[4740]: I0216 13:25:12.985022 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" podStartSLOduration=1.54363062 podStartE2EDuration="1.984996287s" podCreationTimestamp="2026-02-16 13:25:11 +0000 UTC" firstStartedPulling="2026-02-16 13:25:11.863424126 +0000 UTC m=+1939.239772857" lastFinishedPulling="2026-02-16 13:25:12.304789803 +0000 UTC m=+1939.681138524" observedRunningTime="2026-02-16 13:25:12.980057393 +0000 UTC m=+1940.356406144" watchObservedRunningTime="2026-02-16 13:25:12.984996287 +0000 UTC m=+1940.361345048" Feb 16 13:25:45 crc kubenswrapper[4740]: I0216 13:25:45.575947 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:25:45 crc kubenswrapper[4740]: I0216 13:25:45.578790 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:25:58 crc kubenswrapper[4740]: I0216 13:25:58.401579 4740 generic.go:334] "Generic (PLEG): container finished" podID="3a7cecfd-1168-4187-a70c-7b2151ff214f" containerID="d8bb03661c90a47850c899f4cabcc8ce7bab8422fa528702cfdf8cd3447643dc" exitCode=0 Feb 16 13:25:58 crc kubenswrapper[4740]: I0216 13:25:58.401661 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" event={"ID":"3a7cecfd-1168-4187-a70c-7b2151ff214f","Type":"ContainerDied","Data":"d8bb03661c90a47850c899f4cabcc8ce7bab8422fa528702cfdf8cd3447643dc"} Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.815994 4740 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891219 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891519 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891596 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891699 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891778 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7vfj\" (UniqueName: \"kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" 
(UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.891909 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam\") pod \"3a7cecfd-1168-4187-a70c-7b2151ff214f\" (UID: \"3a7cecfd-1168-4187-a70c-7b2151ff214f\") " Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.897208 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj" (OuterVolumeSpecName: "kube-api-access-q7vfj") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "kube-api-access-q7vfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.897420 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.917452 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory" (OuterVolumeSpecName: "inventory") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.919353 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.920447 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.923372 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3a7cecfd-1168-4187-a70c-7b2151ff214f" (UID: "3a7cecfd-1168-4187-a70c-7b2151ff214f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994109 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7vfj\" (UniqueName: \"kubernetes.io/projected/3a7cecfd-1168-4187-a70c-7b2151ff214f-kube-api-access-q7vfj\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994136 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994146 4740 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994156 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994164 4740 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:25:59 crc kubenswrapper[4740]: I0216 13:25:59.994176 4740 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a7cecfd-1168-4187-a70c-7b2151ff214f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.422375 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" event={"ID":"3a7cecfd-1168-4187-a70c-7b2151ff214f","Type":"ContainerDied","Data":"d62b36a9918816bc2b3ac769046ef753b4bbbfcf769e0afedd743b11e8af00b0"} Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.422449 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d62b36a9918816bc2b3ac769046ef753b4bbbfcf769e0afedd743b11e8af00b0" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.422469 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.528780 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65"] Feb 16 13:26:00 crc kubenswrapper[4740]: E0216 13:26:00.530388 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7cecfd-1168-4187-a70c-7b2151ff214f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.530466 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7cecfd-1168-4187-a70c-7b2151ff214f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.530787 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a7cecfd-1168-4187-a70c-7b2151ff214f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.531402 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.533619 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.533652 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.534027 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.534215 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.535080 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.549566 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65"] Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.606470 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ctr4\" (UniqueName: \"kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.606524 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.606546 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.606574 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.606600 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.707881 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ctr4\" (UniqueName: \"kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.708169 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.708270 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.708379 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.708488 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.713037 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: 
\"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.714273 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.718613 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.718853 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.725110 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ctr4\" (UniqueName: \"kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjh65\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:00 crc kubenswrapper[4740]: I0216 13:26:00.847572 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:26:01 crc kubenswrapper[4740]: I0216 13:26:01.396461 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65"] Feb 16 13:26:01 crc kubenswrapper[4740]: W0216 13:26:01.400714 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ab3e576_ab98_496c_a189_2e79796f9e98.slice/crio-6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7 WatchSource:0}: Error finding container 6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7: Status 404 returned error can't find the container with id 6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7 Feb 16 13:26:01 crc kubenswrapper[4740]: I0216 13:26:01.438353 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" event={"ID":"2ab3e576-ab98-496c-a189-2e79796f9e98","Type":"ContainerStarted","Data":"6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7"} Feb 16 13:26:02 crc kubenswrapper[4740]: I0216 13:26:02.454785 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" event={"ID":"2ab3e576-ab98-496c-a189-2e79796f9e98","Type":"ContainerStarted","Data":"00d940ff891b6d31b8981e01a86592ecfae7fb23e2e69d93d48fd3e2223b31d7"} Feb 16 13:26:02 crc kubenswrapper[4740]: I0216 13:26:02.480625 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" podStartSLOduration=2.058793017 podStartE2EDuration="2.480591322s" podCreationTimestamp="2026-02-16 13:26:00 +0000 UTC" firstStartedPulling="2026-02-16 13:26:01.405687549 +0000 UTC m=+1988.782036270" lastFinishedPulling="2026-02-16 13:26:01.827485854 +0000 UTC m=+1989.203834575" 
observedRunningTime="2026-02-16 13:26:02.472962213 +0000 UTC m=+1989.849310934" watchObservedRunningTime="2026-02-16 13:26:02.480591322 +0000 UTC m=+1989.856940033" Feb 16 13:26:15 crc kubenswrapper[4740]: I0216 13:26:15.574746 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:26:15 crc kubenswrapper[4740]: I0216 13:26:15.575220 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.575437 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.576534 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.576892 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.578177 4740 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.578275 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239" gracePeriod=600 Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.898082 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239" exitCode=0 Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.898156 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239"} Feb 16 13:26:45 crc kubenswrapper[4740]: I0216 13:26:45.898504 4740 scope.go:117] "RemoveContainer" containerID="d7f647f5855835f1f84f8e4f03eea4deacf1e12bbe2c19ebe43188cc7bef5d9e" Feb 16 13:26:46 crc kubenswrapper[4740]: I0216 13:26:46.909581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea"} Feb 16 13:28:45 crc kubenswrapper[4740]: I0216 13:28:45.575544 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:28:45 crc kubenswrapper[4740]: I0216 13:28:45.576157 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:29:15 crc kubenswrapper[4740]: I0216 13:29:15.575431 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:29:15 crc kubenswrapper[4740]: I0216 13:29:15.575996 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:29:45 crc kubenswrapper[4740]: I0216 13:29:45.574971 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:29:45 crc kubenswrapper[4740]: I0216 13:29:45.575760 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:29:45 crc kubenswrapper[4740]: I0216 13:29:45.575865 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:29:45 crc kubenswrapper[4740]: I0216 13:29:45.577141 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:29:45 crc kubenswrapper[4740]: I0216 13:29:45.577267 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" gracePeriod=600 Feb 16 13:29:45 crc kubenswrapper[4740]: E0216 13:29:45.699866 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:29:46 crc kubenswrapper[4740]: I0216 13:29:46.632395 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" exitCode=0 Feb 16 13:29:46 crc kubenswrapper[4740]: I0216 13:29:46.632728 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea"} Feb 16 13:29:46 crc kubenswrapper[4740]: I0216 13:29:46.632761 4740 scope.go:117] "RemoveContainer" containerID="6250ac33711e1fb09c10c905036f4991b8824d9cd6153cb626730b0836a01239" Feb 16 13:29:46 crc kubenswrapper[4740]: I0216 13:29:46.633446 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:29:46 crc kubenswrapper[4740]: E0216 13:29:46.633678 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.373015 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.376927 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.390655 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.467425 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.467783 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rp9z\" (UniqueName: \"kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.468448 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.570178 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.570257 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.570309 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rp9z\" (UniqueName: \"kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.570996 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.571061 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.593774 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rp9z\" (UniqueName: \"kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z\") pod \"community-operators-brjxc\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:50 crc kubenswrapper[4740]: I0216 13:29:50.725651 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:29:51 crc kubenswrapper[4740]: I0216 13:29:51.238448 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:29:51 crc kubenswrapper[4740]: I0216 13:29:51.683724 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerID="ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4" exitCode=0 Feb 16 13:29:51 crc kubenswrapper[4740]: I0216 13:29:51.683842 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerDied","Data":"ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4"} Feb 16 13:29:51 crc kubenswrapper[4740]: I0216 13:29:51.685104 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerStarted","Data":"3bb3773605d31d7edb6a9738bbc759b67a01af96bc53a8aeae49b88045e7b813"} Feb 16 13:29:53 crc kubenswrapper[4740]: I0216 13:29:53.707468 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerID="f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82" exitCode=0 Feb 16 13:29:53 crc kubenswrapper[4740]: I0216 13:29:53.707567 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerDied","Data":"f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82"} Feb 16 13:29:53 crc kubenswrapper[4740]: I0216 13:29:53.712580 4740 generic.go:334] "Generic (PLEG): container finished" podID="2ab3e576-ab98-496c-a189-2e79796f9e98" containerID="00d940ff891b6d31b8981e01a86592ecfae7fb23e2e69d93d48fd3e2223b31d7" exitCode=0 Feb 16 
13:29:53 crc kubenswrapper[4740]: I0216 13:29:53.712642 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" event={"ID":"2ab3e576-ab98-496c-a189-2e79796f9e98","Type":"ContainerDied","Data":"00d940ff891b6d31b8981e01a86592ecfae7fb23e2e69d93d48fd3e2223b31d7"} Feb 16 13:29:54 crc kubenswrapper[4740]: I0216 13:29:54.722017 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerStarted","Data":"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c"} Feb 16 13:29:54 crc kubenswrapper[4740]: I0216 13:29:54.747055 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-brjxc" podStartSLOduration=2.103185228 podStartE2EDuration="4.747034231s" podCreationTimestamp="2026-02-16 13:29:50 +0000 UTC" firstStartedPulling="2026-02-16 13:29:51.685584578 +0000 UTC m=+2219.061933309" lastFinishedPulling="2026-02-16 13:29:54.329433591 +0000 UTC m=+2221.705782312" observedRunningTime="2026-02-16 13:29:54.742349184 +0000 UTC m=+2222.118697905" watchObservedRunningTime="2026-02-16 13:29:54.747034231 +0000 UTC m=+2222.123382952" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.159428 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.264360 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory\") pod \"2ab3e576-ab98-496c-a189-2e79796f9e98\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.264591 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ctr4\" (UniqueName: \"kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4\") pod \"2ab3e576-ab98-496c-a189-2e79796f9e98\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.264683 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam\") pod \"2ab3e576-ab98-496c-a189-2e79796f9e98\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.264848 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle\") pod \"2ab3e576-ab98-496c-a189-2e79796f9e98\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.264904 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0\") pod \"2ab3e576-ab98-496c-a189-2e79796f9e98\" (UID: \"2ab3e576-ab98-496c-a189-2e79796f9e98\") " Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.271651 4740 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2ab3e576-ab98-496c-a189-2e79796f9e98" (UID: "2ab3e576-ab98-496c-a189-2e79796f9e98"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.271740 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4" (OuterVolumeSpecName: "kube-api-access-8ctr4") pod "2ab3e576-ab98-496c-a189-2e79796f9e98" (UID: "2ab3e576-ab98-496c-a189-2e79796f9e98"). InnerVolumeSpecName "kube-api-access-8ctr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.295128 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "2ab3e576-ab98-496c-a189-2e79796f9e98" (UID: "2ab3e576-ab98-496c-a189-2e79796f9e98"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.302901 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2ab3e576-ab98-496c-a189-2e79796f9e98" (UID: "2ab3e576-ab98-496c-a189-2e79796f9e98"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.305137 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory" (OuterVolumeSpecName: "inventory") pod "2ab3e576-ab98-496c-a189-2e79796f9e98" (UID: "2ab3e576-ab98-496c-a189-2e79796f9e98"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.369893 4740 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.369936 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.369957 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ctr4\" (UniqueName: \"kubernetes.io/projected/2ab3e576-ab98-496c-a189-2e79796f9e98-kube-api-access-8ctr4\") on node \"crc\" DevicePath \"\"" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.369978 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.370013 4740 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab3e576-ab98-496c-a189-2e79796f9e98-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.732846 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.732841 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjh65" event={"ID":"2ab3e576-ab98-496c-a189-2e79796f9e98","Type":"ContainerDied","Data":"6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7"} Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.732977 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6db99d2d0c14957d4ebcbcf2fbd6d0763973e16c812983294963b64228105bf7" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.854408 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj"] Feb 16 13:29:55 crc kubenswrapper[4740]: E0216 13:29:55.855193 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab3e576-ab98-496c-a189-2e79796f9e98" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.855223 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab3e576-ab98-496c-a189-2e79796f9e98" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.855504 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab3e576-ab98-496c-a189-2e79796f9e98" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.856382 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.858642 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.858668 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.858795 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.858643 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.859987 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.860235 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.869565 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj"] Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.891381 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.986641 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: 
I0216 13:29:55.986847 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.986947 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987023 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987114 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvrcf\" (UniqueName: \"kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987181 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" 
(UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987251 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987316 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987347 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987418 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: 
\"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:55 crc kubenswrapper[4740]: I0216 13:29:55.987550 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.088805 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.088886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.088933 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.088976 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089009 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089049 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvrcf\" (UniqueName: \"kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089067 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089090 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089116 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089135 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.089154 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.090146 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.094788 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.095513 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.095546 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.096053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.096535 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.105708 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.106009 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.106175 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.106573 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.109055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvrcf\" (UniqueName: 
\"kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lhwdj\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.180551 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.741711 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj"] Feb 16 13:29:56 crc kubenswrapper[4740]: I0216 13:29:56.744575 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" event={"ID":"58706e85-268c-4ce0-b1e4-82dd86872568","Type":"ContainerStarted","Data":"a66651bb06a9c8ba05576de47c0da196fe76991428b790e9e660d8ab3c28e74d"} Feb 16 13:29:57 crc kubenswrapper[4740]: I0216 13:29:57.755017 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" event={"ID":"58706e85-268c-4ce0-b1e4-82dd86872568","Type":"ContainerStarted","Data":"f8dfccb248f53dd9fdb3059db9ec3a73dd1f026ee947f237e40616dbda324d1b"} Feb 16 13:29:57 crc kubenswrapper[4740]: I0216 13:29:57.783545 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" podStartSLOduration=2.1966779770000002 podStartE2EDuration="2.783481094s" podCreationTimestamp="2026-02-16 13:29:55 +0000 UTC" firstStartedPulling="2026-02-16 13:29:56.725880311 +0000 UTC m=+2224.102229072" lastFinishedPulling="2026-02-16 13:29:57.312683448 +0000 UTC m=+2224.689032189" observedRunningTime="2026-02-16 13:29:57.782311097 +0000 UTC m=+2225.158659858" watchObservedRunningTime="2026-02-16 13:29:57.783481094 +0000 UTC m=+2225.159829855" Feb 16 13:29:58 crc kubenswrapper[4740]: 
I0216 13:29:58.281474 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:29:58 crc kubenswrapper[4740]: E0216 13:29:58.281723 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.140167 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr"] Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.142046 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.145774 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.154424 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr"] Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.154719 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.286653 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.286768 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49pq9\" (UniqueName: \"kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.287090 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.389197 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.390037 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49pq9\" (UniqueName: \"kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.390338 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.391039 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.404525 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.409786 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49pq9\" (UniqueName: \"kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9\") pod \"collect-profiles-29520810-g76mr\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.470474 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.726361 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.726694 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.776937 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.827592 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:00 crc kubenswrapper[4740]: W0216 13:30:00.925953 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode01f374e_7e34_4175_b300_1d1a5f95c85e.slice/crio-332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd WatchSource:0}: Error finding container 332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd: Status 404 returned error can't find the container with id 332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd Feb 16 13:30:00 crc kubenswrapper[4740]: I0216 13:30:00.936042 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr"] Feb 16 13:30:01 crc kubenswrapper[4740]: I0216 13:30:01.016553 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:30:01 crc kubenswrapper[4740]: I0216 13:30:01.793979 4740 generic.go:334] "Generic (PLEG): container finished" podID="e01f374e-7e34-4175-b300-1d1a5f95c85e" 
containerID="b4252b006f9315f40457f664ad289bfcd573b8f4acc4b78060ada515a3efb79d" exitCode=0 Feb 16 13:30:01 crc kubenswrapper[4740]: I0216 13:30:01.794104 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" event={"ID":"e01f374e-7e34-4175-b300-1d1a5f95c85e","Type":"ContainerDied","Data":"b4252b006f9315f40457f664ad289bfcd573b8f4acc4b78060ada515a3efb79d"} Feb 16 13:30:01 crc kubenswrapper[4740]: I0216 13:30:01.794445 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" event={"ID":"e01f374e-7e34-4175-b300-1d1a5f95c85e","Type":"ContainerStarted","Data":"332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd"} Feb 16 13:30:02 crc kubenswrapper[4740]: I0216 13:30:02.805394 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-brjxc" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="registry-server" containerID="cri-o://6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c" gracePeriod=2 Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.178388 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.276377 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.352202 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume\") pod \"e01f374e-7e34-4175-b300-1d1a5f95c85e\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.352246 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49pq9\" (UniqueName: \"kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9\") pod \"e01f374e-7e34-4175-b300-1d1a5f95c85e\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.352329 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume\") pod \"e01f374e-7e34-4175-b300-1d1a5f95c85e\" (UID: \"e01f374e-7e34-4175-b300-1d1a5f95c85e\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.353135 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume" (OuterVolumeSpecName: "config-volume") pod "e01f374e-7e34-4175-b300-1d1a5f95c85e" (UID: "e01f374e-7e34-4175-b300-1d1a5f95c85e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.357577 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e01f374e-7e34-4175-b300-1d1a5f95c85e" (UID: "e01f374e-7e34-4175-b300-1d1a5f95c85e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.358863 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9" (OuterVolumeSpecName: "kube-api-access-49pq9") pod "e01f374e-7e34-4175-b300-1d1a5f95c85e" (UID: "e01f374e-7e34-4175-b300-1d1a5f95c85e"). InnerVolumeSpecName "kube-api-access-49pq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.454235 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content\") pod \"fbcc12dc-03f3-4820-865b-e43d66da1be5\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.454458 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities\") pod \"fbcc12dc-03f3-4820-865b-e43d66da1be5\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.454534 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rp9z\" (UniqueName: \"kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z\") pod \"fbcc12dc-03f3-4820-865b-e43d66da1be5\" (UID: \"fbcc12dc-03f3-4820-865b-e43d66da1be5\") " Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.455009 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01f374e-7e34-4175-b300-1d1a5f95c85e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.455034 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49pq9\" 
(UniqueName: \"kubernetes.io/projected/e01f374e-7e34-4175-b300-1d1a5f95c85e-kube-api-access-49pq9\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.455044 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01f374e-7e34-4175-b300-1d1a5f95c85e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.455990 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities" (OuterVolumeSpecName: "utilities") pod "fbcc12dc-03f3-4820-865b-e43d66da1be5" (UID: "fbcc12dc-03f3-4820-865b-e43d66da1be5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.460121 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z" (OuterVolumeSpecName: "kube-api-access-2rp9z") pod "fbcc12dc-03f3-4820-865b-e43d66da1be5" (UID: "fbcc12dc-03f3-4820-865b-e43d66da1be5"). InnerVolumeSpecName "kube-api-access-2rp9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.512373 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbcc12dc-03f3-4820-865b-e43d66da1be5" (UID: "fbcc12dc-03f3-4820-865b-e43d66da1be5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.556785 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.556833 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbcc12dc-03f3-4820-865b-e43d66da1be5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.556844 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rp9z\" (UniqueName: \"kubernetes.io/projected/fbcc12dc-03f3-4820-865b-e43d66da1be5-kube-api-access-2rp9z\") on node \"crc\" DevicePath \"\"" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.817267 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.817911 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-g76mr" event={"ID":"e01f374e-7e34-4175-b300-1d1a5f95c85e","Type":"ContainerDied","Data":"332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd"} Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.817977 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="332290f363dc16eb5634ae7f13b714610cc6b8add9526fd8a69a7facbdb2a5dd" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.821188 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerID="6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c" exitCode=0 Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.821255 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerDied","Data":"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c"} Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.821292 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brjxc" event={"ID":"fbcc12dc-03f3-4820-865b-e43d66da1be5","Type":"ContainerDied","Data":"3bb3773605d31d7edb6a9738bbc759b67a01af96bc53a8aeae49b88045e7b813"} Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.821295 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brjxc" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.821312 4740 scope.go:117] "RemoveContainer" containerID="6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.868857 4740 scope.go:117] "RemoveContainer" containerID="f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.874639 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.882769 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-brjxc"] Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.904066 4740 scope.go:117] "RemoveContainer" containerID="ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.920074 4740 scope.go:117] "RemoveContainer" containerID="6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c" Feb 16 13:30:03 crc kubenswrapper[4740]: E0216 13:30:03.920425 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c\": container with ID starting with 6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c not found: ID does not exist" containerID="6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.920461 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c"} err="failed to get container status \"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c\": rpc error: code = NotFound desc = could not find container \"6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c\": container with ID starting with 6638afeff720c5c889b4c5e8f6d5824a788884ce95f7ad8f6dd3315faa83bf0c not found: ID does not exist" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.920483 4740 scope.go:117] "RemoveContainer" containerID="f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82" Feb 16 13:30:03 crc kubenswrapper[4740]: E0216 13:30:03.920697 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82\": container with ID starting with f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82 not found: ID does not exist" containerID="f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.920717 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82"} err="failed to get container status \"f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82\": rpc error: code = NotFound desc = could not find container \"f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82\": container with ID 
starting with f8201861236d50cb73a8fe3b76d186132f0f2c672a062a9a9e3324b38c395f82 not found: ID does not exist" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.920730 4740 scope.go:117] "RemoveContainer" containerID="ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4" Feb 16 13:30:03 crc kubenswrapper[4740]: E0216 13:30:03.920985 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4\": container with ID starting with ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4 not found: ID does not exist" containerID="ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4" Feb 16 13:30:03 crc kubenswrapper[4740]: I0216 13:30:03.921006 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4"} err="failed to get container status \"ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4\": rpc error: code = NotFound desc = could not find container \"ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4\": container with ID starting with ef2ed67191fb131242d25fd1bfb815bf811dbd7dfb5f7c998bc79e96bbc368c4 not found: ID does not exist" Feb 16 13:30:04 crc kubenswrapper[4740]: I0216 13:30:04.258657 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"] Feb 16 13:30:04 crc kubenswrapper[4740]: I0216 13:30:04.269269 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520765-mt89v"] Feb 16 13:30:05 crc kubenswrapper[4740]: I0216 13:30:05.294498 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9062ffdd-baa5-4ebc-8f40-353fac0e821e" path="/var/lib/kubelet/pods/9062ffdd-baa5-4ebc-8f40-353fac0e821e/volumes" Feb 16 13:30:05 
crc kubenswrapper[4740]: I0216 13:30:05.296055 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" path="/var/lib/kubelet/pods/fbcc12dc-03f3-4820-865b-e43d66da1be5/volumes" Feb 16 13:30:05 crc kubenswrapper[4740]: I0216 13:30:05.965449 4740 scope.go:117] "RemoveContainer" containerID="907512b6409722d30c16b894b8c9741e98ad5cd769eb1a4db429190f1ce78cae" Feb 16 13:30:11 crc kubenswrapper[4740]: I0216 13:30:11.281475 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:30:11 crc kubenswrapper[4740]: E0216 13:30:11.282292 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:30:23 crc kubenswrapper[4740]: I0216 13:30:23.287079 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:30:23 crc kubenswrapper[4740]: E0216 13:30:23.287801 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:30:36 crc kubenswrapper[4740]: I0216 13:30:36.282267 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:30:36 crc kubenswrapper[4740]: E0216 13:30:36.283278 
4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:30:50 crc kubenswrapper[4740]: I0216 13:30:50.287836 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:30:50 crc kubenswrapper[4740]: E0216 13:30:50.289141 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:31:05 crc kubenswrapper[4740]: I0216 13:31:05.281875 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:31:05 crc kubenswrapper[4740]: E0216 13:31:05.282853 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:31:17 crc kubenswrapper[4740]: I0216 13:31:17.281697 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:31:17 crc kubenswrapper[4740]: E0216 
13:31:17.282547 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:31:31 crc kubenswrapper[4740]: I0216 13:31:31.281472 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:31:31 crc kubenswrapper[4740]: E0216 13:31:31.282565 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.183637 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:35 crc kubenswrapper[4740]: E0216 13:31:35.184434 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="extract-utilities" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184446 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="extract-utilities" Feb 16 13:31:35 crc kubenswrapper[4740]: E0216 13:31:35.184466 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="registry-server" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184472 4740 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="registry-server" Feb 16 13:31:35 crc kubenswrapper[4740]: E0216 13:31:35.184515 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="extract-content" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184521 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="extract-content" Feb 16 13:31:35 crc kubenswrapper[4740]: E0216 13:31:35.184533 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01f374e-7e34-4175-b300-1d1a5f95c85e" containerName="collect-profiles" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184539 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01f374e-7e34-4175-b300-1d1a5f95c85e" containerName="collect-profiles" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184701 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01f374e-7e34-4175-b300-1d1a5f95c85e" containerName="collect-profiles" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.184718 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbcc12dc-03f3-4820-865b-e43d66da1be5" containerName="registry-server" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.209388 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.217197 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.382144 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chm2k\" (UniqueName: \"kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.382435 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.382640 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.485563 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chm2k\" (UniqueName: \"kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.485657 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.485725 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.486723 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.487283 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.505709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chm2k\" (UniqueName: \"kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k\") pod \"redhat-marketplace-9gmt7\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:35 crc kubenswrapper[4740]: I0216 13:31:35.538745 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:36 crc kubenswrapper[4740]: I0216 13:31:36.005552 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:36 crc kubenswrapper[4740]: I0216 13:31:36.710149 4740 generic.go:334] "Generic (PLEG): container finished" podID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerID="3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835" exitCode=0 Feb 16 13:31:36 crc kubenswrapper[4740]: I0216 13:31:36.710200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerDied","Data":"3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835"} Feb 16 13:31:36 crc kubenswrapper[4740]: I0216 13:31:36.710525 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerStarted","Data":"2dcbb0582701463a2dee9946fec569414499594c16cd1e7030fbd328e5d7fb94"} Feb 16 13:31:36 crc kubenswrapper[4740]: I0216 13:31:36.712908 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:31:37 crc kubenswrapper[4740]: I0216 13:31:37.724975 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerStarted","Data":"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a"} Feb 16 13:31:38 crc kubenswrapper[4740]: I0216 13:31:38.741121 4740 generic.go:334] "Generic (PLEG): container finished" podID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerID="f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a" exitCode=0 Feb 16 13:31:38 crc kubenswrapper[4740]: I0216 13:31:38.741219 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerDied","Data":"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a"} Feb 16 13:31:39 crc kubenswrapper[4740]: I0216 13:31:39.752990 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerStarted","Data":"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa"} Feb 16 13:31:39 crc kubenswrapper[4740]: I0216 13:31:39.784604 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9gmt7" podStartSLOduration=2.253947753 podStartE2EDuration="4.784579631s" podCreationTimestamp="2026-02-16 13:31:35 +0000 UTC" firstStartedPulling="2026-02-16 13:31:36.712398306 +0000 UTC m=+2324.088747047" lastFinishedPulling="2026-02-16 13:31:39.243030204 +0000 UTC m=+2326.619378925" observedRunningTime="2026-02-16 13:31:39.775069346 +0000 UTC m=+2327.151418077" watchObservedRunningTime="2026-02-16 13:31:39.784579631 +0000 UTC m=+2327.160928352" Feb 16 13:31:42 crc kubenswrapper[4740]: I0216 13:31:42.281546 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:31:42 crc kubenswrapper[4740]: E0216 13:31:42.282328 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:31:45 crc kubenswrapper[4740]: I0216 13:31:45.539912 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:45 crc kubenswrapper[4740]: I0216 13:31:45.540273 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:45 crc kubenswrapper[4740]: I0216 13:31:45.586468 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:45 crc kubenswrapper[4740]: I0216 13:31:45.853917 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:45 crc kubenswrapper[4740]: I0216 13:31:45.901127 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:47 crc kubenswrapper[4740]: I0216 13:31:47.843180 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9gmt7" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="registry-server" containerID="cri-o://2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa" gracePeriod=2 Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.328442 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.447364 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chm2k\" (UniqueName: \"kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k\") pod \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.447446 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities\") pod \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.447559 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content\") pod \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\" (UID: \"40f84c73-01b8-48f4-8bd7-30a4be00f6c5\") " Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.448998 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities" (OuterVolumeSpecName: "utilities") pod "40f84c73-01b8-48f4-8bd7-30a4be00f6c5" (UID: "40f84c73-01b8-48f4-8bd7-30a4be00f6c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.454251 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k" (OuterVolumeSpecName: "kube-api-access-chm2k") pod "40f84c73-01b8-48f4-8bd7-30a4be00f6c5" (UID: "40f84c73-01b8-48f4-8bd7-30a4be00f6c5"). InnerVolumeSpecName "kube-api-access-chm2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.470417 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40f84c73-01b8-48f4-8bd7-30a4be00f6c5" (UID: "40f84c73-01b8-48f4-8bd7-30a4be00f6c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.550108 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.550135 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chm2k\" (UniqueName: \"kubernetes.io/projected/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-kube-api-access-chm2k\") on node \"crc\" DevicePath \"\"" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.550147 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f84c73-01b8-48f4-8bd7-30a4be00f6c5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.855879 4740 generic.go:334] "Generic (PLEG): container finished" podID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerID="2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa" exitCode=0 Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.855953 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerDied","Data":"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa"} Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.855990 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-9gmt7" event={"ID":"40f84c73-01b8-48f4-8bd7-30a4be00f6c5","Type":"ContainerDied","Data":"2dcbb0582701463a2dee9946fec569414499594c16cd1e7030fbd328e5d7fb94"} Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.856011 4740 scope.go:117] "RemoveContainer" containerID="2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.856200 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gmt7" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.885244 4740 scope.go:117] "RemoveContainer" containerID="f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.905927 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.915481 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gmt7"] Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.927982 4740 scope.go:117] "RemoveContainer" containerID="3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.968644 4740 scope.go:117] "RemoveContainer" containerID="2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa" Feb 16 13:31:48 crc kubenswrapper[4740]: E0216 13:31:48.969549 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa\": container with ID starting with 2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa not found: ID does not exist" containerID="2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.969602 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa"} err="failed to get container status \"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa\": rpc error: code = NotFound desc = could not find container \"2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa\": container with ID starting with 2d253e63a146bcc35eeb67d79a829cca6d81358786811ab8b709d56952b3a7fa not found: ID does not exist" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.969635 4740 scope.go:117] "RemoveContainer" containerID="f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a" Feb 16 13:31:48 crc kubenswrapper[4740]: E0216 13:31:48.970134 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a\": container with ID starting with f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a not found: ID does not exist" containerID="f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.970157 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a"} err="failed to get container status \"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a\": rpc error: code = NotFound desc = could not find container \"f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a\": container with ID starting with f2dee5d30f6da949bad7949f4e46cdc2af88a45dd49f7d0304e3674fa845455a not found: ID does not exist" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.970171 4740 scope.go:117] "RemoveContainer" containerID="3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835" Feb 16 13:31:48 crc kubenswrapper[4740]: E0216 
13:31:48.970425 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835\": container with ID starting with 3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835 not found: ID does not exist" containerID="3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835" Feb 16 13:31:48 crc kubenswrapper[4740]: I0216 13:31:48.970452 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835"} err="failed to get container status \"3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835\": rpc error: code = NotFound desc = could not find container \"3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835\": container with ID starting with 3d3f83b94314bf5a3e97fd4b6185a8b7a5e56ee47b058a283b712f4363b15835 not found: ID does not exist" Feb 16 13:31:49 crc kubenswrapper[4740]: I0216 13:31:49.300994 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" path="/var/lib/kubelet/pods/40f84c73-01b8-48f4-8bd7-30a4be00f6c5/volumes" Feb 16 13:31:54 crc kubenswrapper[4740]: I0216 13:31:54.281219 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:31:54 crc kubenswrapper[4740]: E0216 13:31:54.281927 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:32:07 crc kubenswrapper[4740]: I0216 13:32:07.281567 
4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:32:07 crc kubenswrapper[4740]: E0216 13:32:07.283015 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:32:15 crc kubenswrapper[4740]: I0216 13:32:15.371238 4740 generic.go:334] "Generic (PLEG): container finished" podID="58706e85-268c-4ce0-b1e4-82dd86872568" containerID="f8dfccb248f53dd9fdb3059db9ec3a73dd1f026ee947f237e40616dbda324d1b" exitCode=0 Feb 16 13:32:15 crc kubenswrapper[4740]: I0216 13:32:15.372138 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" event={"ID":"58706e85-268c-4ce0-b1e4-82dd86872568","Type":"ContainerDied","Data":"f8dfccb248f53dd9fdb3059db9ec3a73dd1f026ee947f237e40616dbda324d1b"} Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.807740 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931584 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvrcf\" (UniqueName: \"kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931673 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931734 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931776 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931885 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.931917 
4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.932533 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.932568 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.932607 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.932630 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.932659 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1\") pod \"58706e85-268c-4ce0-b1e4-82dd86872568\" (UID: \"58706e85-268c-4ce0-b1e4-82dd86872568\") " Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.938069 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.945597 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf" (OuterVolumeSpecName: "kube-api-access-vvrcf") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "kube-api-access-vvrcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.962086 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.962923 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory" (OuterVolumeSpecName: "inventory") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.966868 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.969258 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.972516 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.980081 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.980439 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.984104 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:16 crc kubenswrapper[4740]: I0216 13:32:16.987127 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "58706e85-268c-4ce0-b1e4-82dd86872568" (UID: "58706e85-268c-4ce0-b1e4-82dd86872568"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.036964 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvrcf\" (UniqueName: \"kubernetes.io/projected/58706e85-268c-4ce0-b1e4-82dd86872568-kube-api-access-vvrcf\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037581 4740 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/58706e85-268c-4ce0-b1e4-82dd86872568-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037661 4740 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037705 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037718 4740 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037730 4740 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037741 4740 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037754 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037764 4740 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037777 4740 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.037787 4740 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/58706e85-268c-4ce0-b1e4-82dd86872568-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.392237 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" event={"ID":"58706e85-268c-4ce0-b1e4-82dd86872568","Type":"ContainerDied","Data":"a66651bb06a9c8ba05576de47c0da196fe76991428b790e9e660d8ab3c28e74d"} Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.392296 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a66651bb06a9c8ba05576de47c0da196fe76991428b790e9e660d8ab3c28e74d" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.392350 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lhwdj" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.513250 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn"] Feb 16 13:32:17 crc kubenswrapper[4740]: E0216 13:32:17.513715 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="extract-content" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.513735 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="extract-content" Feb 16 13:32:17 crc kubenswrapper[4740]: E0216 13:32:17.513770 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="registry-server" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.513779 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="registry-server" Feb 16 13:32:17 crc kubenswrapper[4740]: E0216 13:32:17.513793 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58706e85-268c-4ce0-b1e4-82dd86872568" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.513802 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="58706e85-268c-4ce0-b1e4-82dd86872568" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 13:32:17 crc kubenswrapper[4740]: E0216 13:32:17.513862 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="extract-utilities" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.513871 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="extract-utilities" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.514127 4740 
memory_manager.go:354] "RemoveStaleState removing state" podUID="40f84c73-01b8-48f4-8bd7-30a4be00f6c5" containerName="registry-server" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.514168 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="58706e85-268c-4ce0-b1e4-82dd86872568" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.515215 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.517419 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.517420 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.519207 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.519241 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.519623 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l25sb" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.521615 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn"] Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651374 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651583 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnlxs\" (UniqueName: \"kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651684 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651748 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: 
\"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651853 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.651874 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.753954 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.754072 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc 
kubenswrapper[4740]: I0216 13:32:17.754144 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnlxs\" (UniqueName: \"kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.754182 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.754218 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.754258 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.754279 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.761533 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.761719 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.761547 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.761714 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.761533 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.762207 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.774211 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnlxs\" (UniqueName: \"kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99lsn\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:17 crc kubenswrapper[4740]: I0216 13:32:17.832642 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:32:18 crc kubenswrapper[4740]: I0216 13:32:18.375065 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn"] Feb 16 13:32:18 crc kubenswrapper[4740]: I0216 13:32:18.403431 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" event={"ID":"590a1858-7b00-48c8-a2b4-dae7b652ed89","Type":"ContainerStarted","Data":"d4b2351e8245ae52c1c31435c7db70bced645a480174d49bcafb0bdc583bf46e"} Feb 16 13:32:19 crc kubenswrapper[4740]: I0216 13:32:19.433000 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" event={"ID":"590a1858-7b00-48c8-a2b4-dae7b652ed89","Type":"ContainerStarted","Data":"ce0ef89403c661369cedba6260da0206089a73ac4d228d99a9048a25b07de957"} Feb 16 13:32:19 crc kubenswrapper[4740]: I0216 13:32:19.477939 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" podStartSLOduration=2.000476662 podStartE2EDuration="2.477917634s" podCreationTimestamp="2026-02-16 13:32:17 +0000 UTC" firstStartedPulling="2026-02-16 13:32:18.383651804 +0000 UTC m=+2365.760000525" lastFinishedPulling="2026-02-16 13:32:18.861092776 +0000 UTC m=+2366.237441497" observedRunningTime="2026-02-16 13:32:19.47522393 +0000 UTC m=+2366.851572661" watchObservedRunningTime="2026-02-16 13:32:19.477917634 +0000 UTC m=+2366.854266355" Feb 16 13:32:21 crc kubenswrapper[4740]: I0216 13:32:21.281650 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:32:21 crc kubenswrapper[4740]: E0216 13:32:21.282235 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:32:36 crc kubenswrapper[4740]: I0216 13:32:36.281683 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:32:36 crc kubenswrapper[4740]: E0216 13:32:36.282666 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:32:48 crc kubenswrapper[4740]: I0216 13:32:48.281754 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:32:48 crc kubenswrapper[4740]: E0216 13:32:48.282735 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:32:59 crc kubenswrapper[4740]: I0216 13:32:59.281625 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:32:59 crc kubenswrapper[4740]: E0216 13:32:59.282712 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:33:12 crc kubenswrapper[4740]: I0216 13:33:12.281208 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:33:12 crc kubenswrapper[4740]: E0216 13:33:12.282063 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:33:27 crc kubenswrapper[4740]: I0216 13:33:27.281575 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:33:27 crc kubenswrapper[4740]: E0216 13:33:27.283454 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:33:38 crc kubenswrapper[4740]: I0216 13:33:38.282263 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:33:38 crc kubenswrapper[4740]: E0216 13:33:38.283479 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:33:51 crc kubenswrapper[4740]: I0216 13:33:51.280907 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:33:51 crc kubenswrapper[4740]: E0216 13:33:51.281683 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:34:04 crc kubenswrapper[4740]: I0216 13:34:04.281990 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:34:04 crc kubenswrapper[4740]: E0216 13:34:04.282820 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.210965 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.213525 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.233967 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.376376 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh694\" (UniqueName: \"kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.376624 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.376733 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.478094 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh694\" (UniqueName: \"kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.478391 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.478494 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.479208 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.479521 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.498496 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh694\" (UniqueName: \"kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694\") pod \"redhat-operators-46rsw\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.537556 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:08 crc kubenswrapper[4740]: I0216 13:34:08.799877 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:08 crc kubenswrapper[4740]: W0216 13:34:08.806557 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a5b86e1_c137_4d7d_a184_e6f1ac9fa48b.slice/crio-1a07c034600911d3211ce87e185ccc39f7b0a670f89a12468918fa704fcd8c85 WatchSource:0}: Error finding container 1a07c034600911d3211ce87e185ccc39f7b0a670f89a12468918fa704fcd8c85: Status 404 returned error can't find the container with id 1a07c034600911d3211ce87e185ccc39f7b0a670f89a12468918fa704fcd8c85 Feb 16 13:34:09 crc kubenswrapper[4740]: I0216 13:34:09.411839 4740 generic.go:334] "Generic (PLEG): container finished" podID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerID="91124add7238daaae875a4759d18573d98695401e71f59b7beb81d9365e7cdc4" exitCode=0 Feb 16 13:34:09 crc kubenswrapper[4740]: I0216 13:34:09.412032 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerDied","Data":"91124add7238daaae875a4759d18573d98695401e71f59b7beb81d9365e7cdc4"} Feb 16 13:34:09 crc kubenswrapper[4740]: I0216 13:34:09.412252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerStarted","Data":"1a07c034600911d3211ce87e185ccc39f7b0a670f89a12468918fa704fcd8c85"} Feb 16 13:34:10 crc kubenswrapper[4740]: I0216 13:34:10.422719 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" 
event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerStarted","Data":"f5608e0423773aaf7e6dc91127106c2e30973759f4efb057227e9d642e7971a1"} Feb 16 13:34:11 crc kubenswrapper[4740]: I0216 13:34:11.437343 4740 generic.go:334] "Generic (PLEG): container finished" podID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerID="f5608e0423773aaf7e6dc91127106c2e30973759f4efb057227e9d642e7971a1" exitCode=0 Feb 16 13:34:11 crc kubenswrapper[4740]: I0216 13:34:11.437418 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerDied","Data":"f5608e0423773aaf7e6dc91127106c2e30973759f4efb057227e9d642e7971a1"} Feb 16 13:34:12 crc kubenswrapper[4740]: I0216 13:34:12.447057 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerStarted","Data":"bda8b7c153354bbcad8f553408d5b661a0c739d96dc6afb34098c67b4b4bfc09"} Feb 16 13:34:12 crc kubenswrapper[4740]: I0216 13:34:12.474265 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-46rsw" podStartSLOduration=1.758728372 podStartE2EDuration="4.474244708s" podCreationTimestamp="2026-02-16 13:34:08 +0000 UTC" firstStartedPulling="2026-02-16 13:34:09.413878782 +0000 UTC m=+2476.790227503" lastFinishedPulling="2026-02-16 13:34:12.129395108 +0000 UTC m=+2479.505743839" observedRunningTime="2026-02-16 13:34:12.465489905 +0000 UTC m=+2479.841838626" watchObservedRunningTime="2026-02-16 13:34:12.474244708 +0000 UTC m=+2479.850593429" Feb 16 13:34:15 crc kubenswrapper[4740]: I0216 13:34:15.282839 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:34:15 crc kubenswrapper[4740]: E0216 13:34:15.283656 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:34:18 crc kubenswrapper[4740]: I0216 13:34:18.537920 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:18 crc kubenswrapper[4740]: I0216 13:34:18.538572 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:18 crc kubenswrapper[4740]: I0216 13:34:18.608341 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:19 crc kubenswrapper[4740]: I0216 13:34:19.552673 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:19 crc kubenswrapper[4740]: I0216 13:34:19.619155 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:21 crc kubenswrapper[4740]: I0216 13:34:21.529159 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-46rsw" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="registry-server" containerID="cri-o://bda8b7c153354bbcad8f553408d5b661a0c739d96dc6afb34098c67b4b4bfc09" gracePeriod=2 Feb 16 13:34:22 crc kubenswrapper[4740]: I0216 13:34:22.541093 4740 generic.go:334] "Generic (PLEG): container finished" podID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerID="bda8b7c153354bbcad8f553408d5b661a0c739d96dc6afb34098c67b4b4bfc09" exitCode=0 Feb 16 13:34:22 crc kubenswrapper[4740]: I0216 13:34:22.541152 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerDied","Data":"bda8b7c153354bbcad8f553408d5b661a0c739d96dc6afb34098c67b4b4bfc09"} Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.080154 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.092134 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh694\" (UniqueName: \"kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694\") pod \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.092349 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content\") pod \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.092438 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities\") pod \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\" (UID: \"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b\") " Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.093760 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities" (OuterVolumeSpecName: "utilities") pod "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" (UID: "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.104729 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694" (OuterVolumeSpecName: "kube-api-access-mh694") pod "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" (UID: "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b"). InnerVolumeSpecName "kube-api-access-mh694". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.195403 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.195445 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh694\" (UniqueName: \"kubernetes.io/projected/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-kube-api-access-mh694\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.212353 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" (UID: "5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.296662 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.554870 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46rsw" event={"ID":"5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b","Type":"ContainerDied","Data":"1a07c034600911d3211ce87e185ccc39f7b0a670f89a12468918fa704fcd8c85"} Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.554935 4740 scope.go:117] "RemoveContainer" containerID="bda8b7c153354bbcad8f553408d5b661a0c739d96dc6afb34098c67b4b4bfc09" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.554960 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46rsw" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.585030 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.590896 4740 scope.go:117] "RemoveContainer" containerID="f5608e0423773aaf7e6dc91127106c2e30973759f4efb057227e9d642e7971a1" Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.598379 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-46rsw"] Feb 16 13:34:23 crc kubenswrapper[4740]: I0216 13:34:23.628168 4740 scope.go:117] "RemoveContainer" containerID="91124add7238daaae875a4759d18573d98695401e71f59b7beb81d9365e7cdc4" Feb 16 13:34:25 crc kubenswrapper[4740]: I0216 13:34:25.293656 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" path="/var/lib/kubelet/pods/5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b/volumes" Feb 16 13:34:27 crc 
kubenswrapper[4740]: I0216 13:34:27.281056 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:34:27 crc kubenswrapper[4740]: E0216 13:34:27.281886 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:34:38 crc kubenswrapper[4740]: I0216 13:34:38.692283 4740 generic.go:334] "Generic (PLEG): container finished" podID="590a1858-7b00-48c8-a2b4-dae7b652ed89" containerID="ce0ef89403c661369cedba6260da0206089a73ac4d228d99a9048a25b07de957" exitCode=0 Feb 16 13:34:38 crc kubenswrapper[4740]: I0216 13:34:38.692375 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" event={"ID":"590a1858-7b00-48c8-a2b4-dae7b652ed89","Type":"ContainerDied","Data":"ce0ef89403c661369cedba6260da0206089a73ac4d228d99a9048a25b07de957"} Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.131173 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.249005 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.249113 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.249778 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.249854 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.249936 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnlxs\" (UniqueName: \"kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 
13:34:40.249975 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.250040 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0\") pod \"590a1858-7b00-48c8-a2b4-dae7b652ed89\" (UID: \"590a1858-7b00-48c8-a2b4-dae7b652ed89\") " Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.254796 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs" (OuterVolumeSpecName: "kube-api-access-mnlxs") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "kube-api-access-mnlxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.255409 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.280322 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory" (OuterVolumeSpecName: "inventory") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.282802 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.283249 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.285050 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.291422 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "590a1858-7b00-48c8-a2b4-dae7b652ed89" (UID: "590a1858-7b00-48c8-a2b4-dae7b652ed89"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352271 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352306 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352319 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352331 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352348 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc 
kubenswrapper[4740]: I0216 13:34:40.352362 4740 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590a1858-7b00-48c8-a2b4-dae7b652ed89-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.352374 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnlxs\" (UniqueName: \"kubernetes.io/projected/590a1858-7b00-48c8-a2b4-dae7b652ed89-kube-api-access-mnlxs\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.710673 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" event={"ID":"590a1858-7b00-48c8-a2b4-dae7b652ed89","Type":"ContainerDied","Data":"d4b2351e8245ae52c1c31435c7db70bced645a480174d49bcafb0bdc583bf46e"} Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.710985 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b2351e8245ae52c1c31435c7db70bced645a480174d49bcafb0bdc583bf46e" Feb 16 13:34:40 crc kubenswrapper[4740]: I0216 13:34:40.710871 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99lsn" Feb 16 13:34:42 crc kubenswrapper[4740]: I0216 13:34:42.282064 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:34:42 crc kubenswrapper[4740]: E0216 13:34:42.283125 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:34:55 crc kubenswrapper[4740]: I0216 13:34:55.281551 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:34:55 crc kubenswrapper[4740]: I0216 13:34:55.907391 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410"} Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.683866 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 13:35:38 crc kubenswrapper[4740]: E0216 13:35:38.684969 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="extract-utilities" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.684992 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="extract-utilities" Feb 16 13:35:38 crc kubenswrapper[4740]: E0216 13:35:38.685025 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="extract-content" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.685035 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="extract-content" Feb 16 13:35:38 crc kubenswrapper[4740]: E0216 13:35:38.685057 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="registry-server" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.685067 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="registry-server" Feb 16 13:35:38 crc kubenswrapper[4740]: E0216 13:35:38.685092 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590a1858-7b00-48c8-a2b4-dae7b652ed89" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.685102 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="590a1858-7b00-48c8-a2b4-dae7b652ed89" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.685377 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5b86e1-c137-4d7d-a184-e6f1ac9fa48b" containerName="registry-server" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.685413 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="590a1858-7b00-48c8-a2b4-dae7b652ed89" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.688139 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.691289 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.691651 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.691947 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.692267 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mh4bs" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.695105 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.834876 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835356 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835387 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: 
\"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835418 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzmdt\" (UniqueName: \"kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835455 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835734 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835793 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.835920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.836114 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.937735 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.937797 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.937849 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.937883 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzmdt\" (UniqueName: \"kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt\") pod \"tempest-tests-tempest\" (UID: 
\"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.937912 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938395 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938463 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938503 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938041 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938571 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938632 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.938859 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.939666 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.939705 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc 
kubenswrapper[4740]: I0216 13:35:38.945957 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.946188 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.953958 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.956014 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzmdt\" (UniqueName: \"kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:38 crc kubenswrapper[4740]: I0216 13:35:38.967004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " pod="openstack/tempest-tests-tempest" Feb 16 13:35:39 crc kubenswrapper[4740]: I0216 13:35:39.023435 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:35:39 crc kubenswrapper[4740]: I0216 13:35:39.463143 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 13:35:40 crc kubenswrapper[4740]: I0216 13:35:40.317402 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"90aac50c-27a6-4ebd-b207-d3bc439dc1fe","Type":"ContainerStarted","Data":"985883f65375a0c0cb1b9ea3b01ad81f1f9f0b11972d1aff7e3292172bada842"} Feb 16 13:36:08 crc kubenswrapper[4740]: E0216 13:36:08.208221 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 16 13:36:08 crc kubenswrapper[4740]: E0216 13:36:08.208893 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xzmdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(90aac50c-27a6-4ebd-b207-d3bc439dc1fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:36:08 crc kubenswrapper[4740]: E0216 13:36:08.210994 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" Feb 16 13:36:08 crc kubenswrapper[4740]: E0216 13:36:08.603769 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.291500 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.299319 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.315848 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.390074 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj49w\" (UniqueName: \"kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.391309 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.391603 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.493856 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.493919 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.494013 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj49w\" (UniqueName: \"kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.494390 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.494520 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.513673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj49w\" (UniqueName: \"kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w\") pod \"certified-operators-hwtx2\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:10 crc kubenswrapper[4740]: I0216 13:36:10.642284 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:11 crc kubenswrapper[4740]: I0216 13:36:11.508900 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:11 crc kubenswrapper[4740]: W0216 13:36:11.511308 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08bd798f_5a43_4738_9a77_e66a59468ba6.slice/crio-a47449fe856852adc33c5d9024ab7aa9edb59bd881015e3a4518a4ec3b420ba5 WatchSource:0}: Error finding container a47449fe856852adc33c5d9024ab7aa9edb59bd881015e3a4518a4ec3b420ba5: Status 404 returned error can't find the container with id a47449fe856852adc33c5d9024ab7aa9edb59bd881015e3a4518a4ec3b420ba5 Feb 16 13:36:11 crc kubenswrapper[4740]: I0216 13:36:11.624196 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerStarted","Data":"a47449fe856852adc33c5d9024ab7aa9edb59bd881015e3a4518a4ec3b420ba5"} Feb 16 13:36:12 crc kubenswrapper[4740]: I0216 13:36:12.634638 4740 generic.go:334] "Generic (PLEG): container finished" podID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerID="af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35" exitCode=0 Feb 16 13:36:12 crc kubenswrapper[4740]: I0216 13:36:12.634711 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerDied","Data":"af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35"} Feb 16 13:36:13 crc kubenswrapper[4740]: I0216 13:36:13.647472 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" 
event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerStarted","Data":"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28"} Feb 16 13:36:14 crc kubenswrapper[4740]: I0216 13:36:14.663233 4740 generic.go:334] "Generic (PLEG): container finished" podID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerID="b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28" exitCode=0 Feb 16 13:36:14 crc kubenswrapper[4740]: I0216 13:36:14.663326 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerDied","Data":"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28"} Feb 16 13:36:15 crc kubenswrapper[4740]: I0216 13:36:15.672282 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerStarted","Data":"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a"} Feb 16 13:36:15 crc kubenswrapper[4740]: I0216 13:36:15.692670 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hwtx2" podStartSLOduration=3.275788767 podStartE2EDuration="5.692639143s" podCreationTimestamp="2026-02-16 13:36:10 +0000 UTC" firstStartedPulling="2026-02-16 13:36:12.637935587 +0000 UTC m=+2600.014284308" lastFinishedPulling="2026-02-16 13:36:15.054785963 +0000 UTC m=+2602.431134684" observedRunningTime="2026-02-16 13:36:15.687349689 +0000 UTC m=+2603.063698460" watchObservedRunningTime="2026-02-16 13:36:15.692639143 +0000 UTC m=+2603.068987904" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.643678 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.644319 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.685026 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.714558 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.780915 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:20 crc kubenswrapper[4740]: I0216 13:36:20.928228 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:21 crc kubenswrapper[4740]: I0216 13:36:21.753084 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"90aac50c-27a6-4ebd-b207-d3bc439dc1fe","Type":"ContainerStarted","Data":"026326a984d26a260e5f7dd20f7d5284ba1cee86ee7c080001b48ba2acec81a1"} Feb 16 13:36:21 crc kubenswrapper[4740]: I0216 13:36:21.780101 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.540448053 podStartE2EDuration="44.780084181s" podCreationTimestamp="2026-02-16 13:35:37 +0000 UTC" firstStartedPulling="2026-02-16 13:35:39.471713712 +0000 UTC m=+2566.848062433" lastFinishedPulling="2026-02-16 13:36:20.71134984 +0000 UTC m=+2608.087698561" observedRunningTime="2026-02-16 13:36:21.775240141 +0000 UTC m=+2609.151588872" watchObservedRunningTime="2026-02-16 13:36:21.780084181 +0000 UTC m=+2609.156432902" Feb 16 13:36:22 crc kubenswrapper[4740]: I0216 13:36:22.760941 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hwtx2" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" 
containerName="registry-server" containerID="cri-o://d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a" gracePeriod=2 Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.290448 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.353403 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities\") pod \"08bd798f-5a43-4738-9a77-e66a59468ba6\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.353541 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj49w\" (UniqueName: \"kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w\") pod \"08bd798f-5a43-4738-9a77-e66a59468ba6\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.353595 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content\") pod \"08bd798f-5a43-4738-9a77-e66a59468ba6\" (UID: \"08bd798f-5a43-4738-9a77-e66a59468ba6\") " Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.354503 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities" (OuterVolumeSpecName: "utilities") pod "08bd798f-5a43-4738-9a77-e66a59468ba6" (UID: "08bd798f-5a43-4738-9a77-e66a59468ba6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.359719 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w" (OuterVolumeSpecName: "kube-api-access-nj49w") pod "08bd798f-5a43-4738-9a77-e66a59468ba6" (UID: "08bd798f-5a43-4738-9a77-e66a59468ba6"). InnerVolumeSpecName "kube-api-access-nj49w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.422634 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08bd798f-5a43-4738-9a77-e66a59468ba6" (UID: "08bd798f-5a43-4738-9a77-e66a59468ba6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.456268 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.456303 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj49w\" (UniqueName: \"kubernetes.io/projected/08bd798f-5a43-4738-9a77-e66a59468ba6-kube-api-access-nj49w\") on node \"crc\" DevicePath \"\"" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.456314 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bd798f-5a43-4738-9a77-e66a59468ba6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.774108 4740 generic.go:334] "Generic (PLEG): container finished" podID="08bd798f-5a43-4738-9a77-e66a59468ba6" 
containerID="d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a" exitCode=0 Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.774153 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerDied","Data":"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a"} Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.774182 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwtx2" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.774216 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwtx2" event={"ID":"08bd798f-5a43-4738-9a77-e66a59468ba6","Type":"ContainerDied","Data":"a47449fe856852adc33c5d9024ab7aa9edb59bd881015e3a4518a4ec3b420ba5"} Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.774242 4740 scope.go:117] "RemoveContainer" containerID="d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.814690 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.821536 4740 scope.go:117] "RemoveContainer" containerID="b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.822805 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hwtx2"] Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.854212 4740 scope.go:117] "RemoveContainer" containerID="af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.905168 4740 scope.go:117] "RemoveContainer" containerID="d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a" Feb 16 
13:36:23 crc kubenswrapper[4740]: E0216 13:36:23.905632 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a\": container with ID starting with d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a not found: ID does not exist" containerID="d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.905694 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a"} err="failed to get container status \"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a\": rpc error: code = NotFound desc = could not find container \"d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a\": container with ID starting with d7a7994f9bdbd91eab67a50dc4990e33e68964bd9b7618263136c9f281027c0a not found: ID does not exist" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.905731 4740 scope.go:117] "RemoveContainer" containerID="b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28" Feb 16 13:36:23 crc kubenswrapper[4740]: E0216 13:36:23.906122 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28\": container with ID starting with b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28 not found: ID does not exist" containerID="b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.906152 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28"} err="failed to get container status 
\"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28\": rpc error: code = NotFound desc = could not find container \"b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28\": container with ID starting with b639e040f40b29c1b6c7eb94b11f995fbddb40fbadebf786ace983b15a94db28 not found: ID does not exist" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.906176 4740 scope.go:117] "RemoveContainer" containerID="af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35" Feb 16 13:36:23 crc kubenswrapper[4740]: E0216 13:36:23.906405 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35\": container with ID starting with af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35 not found: ID does not exist" containerID="af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35" Feb 16 13:36:23 crc kubenswrapper[4740]: I0216 13:36:23.906427 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35"} err="failed to get container status \"af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35\": rpc error: code = NotFound desc = could not find container \"af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35\": container with ID starting with af52885ffcfd59cf4aee060967ac9e92aab47c6e34565e74358d9e2f17a4fb35 not found: ID does not exist" Feb 16 13:36:25 crc kubenswrapper[4740]: I0216 13:36:25.290593 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" path="/var/lib/kubelet/pods/08bd798f-5a43-4738-9a77-e66a59468ba6/volumes" Feb 16 13:37:15 crc kubenswrapper[4740]: I0216 13:37:15.575257 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:37:15 crc kubenswrapper[4740]: I0216 13:37:15.575867 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:37:45 crc kubenswrapper[4740]: I0216 13:37:45.575153 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:37:45 crc kubenswrapper[4740]: I0216 13:37:45.576047 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.574758 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.575368 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.575413 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.576062 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.576119 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410" gracePeriod=600 Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.858659 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410" exitCode=0 Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.858706 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410"} Feb 16 13:38:15 crc kubenswrapper[4740]: I0216 13:38:15.858748 4740 scope.go:117] "RemoveContainer" containerID="7dbb67aa884122a3e4d505d394c3ef5cfea24fdb0c22528a235818d29b87f5ea" Feb 16 13:38:16 crc kubenswrapper[4740]: I0216 13:38:16.869064 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"} Feb 16 13:40:15 crc kubenswrapper[4740]: I0216 13:40:15.574939 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:40:15 crc kubenswrapper[4740]: I0216 13:40:15.575553 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.415691 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q592g"] Feb 16 13:40:30 crc kubenswrapper[4740]: E0216 13:40:30.416922 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="extract-utilities" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.416943 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="extract-utilities" Feb 16 13:40:30 crc kubenswrapper[4740]: E0216 13:40:30.416975 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="registry-server" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.416984 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="registry-server" Feb 16 13:40:30 crc kubenswrapper[4740]: E0216 
13:40:30.416997 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="extract-content" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.417005 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="extract-content" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.417229 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="08bd798f-5a43-4738-9a77-e66a59468ba6" containerName="registry-server" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.427271 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.435954 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q592g"] Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.601662 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.602020 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.602070 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmc6s\" (UniqueName: 
\"kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.703326 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.703412 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmc6s\" (UniqueName: \"kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.703548 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.704129 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.704137 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.726856 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmc6s\" (UniqueName: \"kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s\") pod \"community-operators-q592g\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:30 crc kubenswrapper[4740]: I0216 13:40:30.760744 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:31 crc kubenswrapper[4740]: I0216 13:40:31.242864 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q592g"] Feb 16 13:40:32 crc kubenswrapper[4740]: I0216 13:40:32.114509 4740 generic.go:334] "Generic (PLEG): container finished" podID="d68afe4b-5647-465b-b601-f16548640dcd" containerID="d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63" exitCode=0 Feb 16 13:40:32 crc kubenswrapper[4740]: I0216 13:40:32.114843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerDied","Data":"d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63"} Feb 16 13:40:32 crc kubenswrapper[4740]: I0216 13:40:32.114880 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerStarted","Data":"fe3dc1554969873d17f2cb07996884b9f394eeb1f06f36890aaf0b0e3ffbc114"} Feb 16 13:40:32 crc kubenswrapper[4740]: I0216 13:40:32.117325 4740 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 16 13:40:34 crc kubenswrapper[4740]: I0216 13:40:34.132572 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerStarted","Data":"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f"} Feb 16 13:40:37 crc kubenswrapper[4740]: I0216 13:40:37.161490 4740 generic.go:334] "Generic (PLEG): container finished" podID="d68afe4b-5647-465b-b601-f16548640dcd" containerID="b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f" exitCode=0 Feb 16 13:40:37 crc kubenswrapper[4740]: I0216 13:40:37.161593 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerDied","Data":"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f"} Feb 16 13:40:38 crc kubenswrapper[4740]: I0216 13:40:38.176173 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerStarted","Data":"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1"} Feb 16 13:40:38 crc kubenswrapper[4740]: I0216 13:40:38.205323 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q592g" podStartSLOduration=2.742681792 podStartE2EDuration="8.205301444s" podCreationTimestamp="2026-02-16 13:40:30 +0000 UTC" firstStartedPulling="2026-02-16 13:40:32.117034999 +0000 UTC m=+2859.493383720" lastFinishedPulling="2026-02-16 13:40:37.579654651 +0000 UTC m=+2864.956003372" observedRunningTime="2026-02-16 13:40:38.198595555 +0000 UTC m=+2865.574944306" watchObservedRunningTime="2026-02-16 13:40:38.205301444 +0000 UTC m=+2865.581650165" Feb 16 13:40:40 crc kubenswrapper[4740]: I0216 13:40:40.761826 4740 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:40 crc kubenswrapper[4740]: I0216 13:40:40.762198 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:41 crc kubenswrapper[4740]: I0216 13:40:41.805080 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-q592g" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="registry-server" probeResult="failure" output=< Feb 16 13:40:41 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 13:40:41 crc kubenswrapper[4740]: > Feb 16 13:40:45 crc kubenswrapper[4740]: I0216 13:40:45.575126 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:40:45 crc kubenswrapper[4740]: I0216 13:40:45.576139 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:40:50 crc kubenswrapper[4740]: I0216 13:40:50.803605 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:50 crc kubenswrapper[4740]: I0216 13:40:50.850174 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:51 crc kubenswrapper[4740]: I0216 13:40:51.041543 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-q592g"] Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.315964 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q592g" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="registry-server" containerID="cri-o://11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1" gracePeriod=2 Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.800015 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.875486 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content\") pod \"d68afe4b-5647-465b-b601-f16548640dcd\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.883190 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities\") pod \"d68afe4b-5647-465b-b601-f16548640dcd\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.883247 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmc6s\" (UniqueName: \"kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s\") pod \"d68afe4b-5647-465b-b601-f16548640dcd\" (UID: \"d68afe4b-5647-465b-b601-f16548640dcd\") " Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.885422 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities" (OuterVolumeSpecName: "utilities") pod "d68afe4b-5647-465b-b601-f16548640dcd" (UID: 
"d68afe4b-5647-465b-b601-f16548640dcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.896692 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s" (OuterVolumeSpecName: "kube-api-access-tmc6s") pod "d68afe4b-5647-465b-b601-f16548640dcd" (UID: "d68afe4b-5647-465b-b601-f16548640dcd"). InnerVolumeSpecName "kube-api-access-tmc6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.926577 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d68afe4b-5647-465b-b601-f16548640dcd" (UID: "d68afe4b-5647-465b-b601-f16548640dcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.985896 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.985931 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68afe4b-5647-465b-b601-f16548640dcd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:40:52 crc kubenswrapper[4740]: I0216 13:40:52.985943 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmc6s\" (UniqueName: \"kubernetes.io/projected/d68afe4b-5647-465b-b601-f16548640dcd-kube-api-access-tmc6s\") on node \"crc\" DevicePath \"\"" Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.329424 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="d68afe4b-5647-465b-b601-f16548640dcd" containerID="11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1" exitCode=0 Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.329467 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerDied","Data":"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1"} Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.329492 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q592g" event={"ID":"d68afe4b-5647-465b-b601-f16548640dcd","Type":"ContainerDied","Data":"fe3dc1554969873d17f2cb07996884b9f394eeb1f06f36890aaf0b0e3ffbc114"} Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.329508 4740 scope.go:117] "RemoveContainer" containerID="11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1" Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.329634 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q592g" Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.361651 4740 scope.go:117] "RemoveContainer" containerID="b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f" Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.372005 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q592g"] Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.382962 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q592g"] Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.388243 4740 scope.go:117] "RemoveContainer" containerID="d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63" Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.431918 4740 scope.go:117] "RemoveContainer" containerID="11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1" Feb 16 13:40:53 crc kubenswrapper[4740]: E0216 13:40:53.432365 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1\": container with ID starting with 11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1 not found: ID does not exist" containerID="11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1" Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.432415 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1"} err="failed to get container status \"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1\": rpc error: code = NotFound desc = could not find container \"11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1\": container with ID starting with 11ed6c3fb3675dd8f0f39bd47afd945b06c04dc499e50123d6582f827e2c11e1 not 
found: ID does not exist" Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.432452 4740 scope.go:117] "RemoveContainer" containerID="b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f" Feb 16 13:40:53 crc kubenswrapper[4740]: E0216 13:40:53.432964 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f\": container with ID starting with b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f not found: ID does not exist" containerID="b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f" Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.433013 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f"} err="failed to get container status \"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f\": rpc error: code = NotFound desc = could not find container \"b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f\": container with ID starting with b4c68167edef1dbeee1bf12660bc5f1cea93abb2a8351eb36f425e659f60849f not found: ID does not exist" Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.433041 4740 scope.go:117] "RemoveContainer" containerID="d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63" Feb 16 13:40:53 crc kubenswrapper[4740]: E0216 13:40:53.433399 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63\": container with ID starting with d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63 not found: ID does not exist" containerID="d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63" Feb 16 13:40:53 crc kubenswrapper[4740]: I0216 13:40:53.433435 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63"} err="failed to get container status \"d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63\": rpc error: code = NotFound desc = could not find container \"d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63\": container with ID starting with d21619d856750a8fc44dcd473f13d3eabe79e30e105d5bb8ce4e8df4bbe5ec63 not found: ID does not exist" Feb 16 13:40:55 crc kubenswrapper[4740]: I0216 13:40:55.292166 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d68afe4b-5647-465b-b601-f16548640dcd" path="/var/lib/kubelet/pods/d68afe4b-5647-465b-b601-f16548640dcd/volumes" Feb 16 13:41:15 crc kubenswrapper[4740]: I0216 13:41:15.574679 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:41:15 crc kubenswrapper[4740]: I0216 13:41:15.575352 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:41:15 crc kubenswrapper[4740]: I0216 13:41:15.575429 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:41:15 crc kubenswrapper[4740]: I0216 13:41:15.578598 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:41:15 crc kubenswrapper[4740]: I0216 13:41:15.578735 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" gracePeriod=600 Feb 16 13:41:15 crc kubenswrapper[4740]: E0216 13:41:15.705790 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:41:16 crc kubenswrapper[4740]: I0216 13:41:16.548135 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" exitCode=0 Feb 16 13:41:16 crc kubenswrapper[4740]: I0216 13:41:16.548219 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"} Feb 16 13:41:16 crc kubenswrapper[4740]: I0216 13:41:16.548288 4740 scope.go:117] "RemoveContainer" containerID="6a135d934e1315fc7c0cdb5301873c3696508a8c9f2ac09f9fe5672fa265b410" Feb 16 13:41:16 crc kubenswrapper[4740]: I0216 13:41:16.549109 4740 
scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:41:16 crc kubenswrapper[4740]: E0216 13:41:16.549571 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:41:29 crc kubenswrapper[4740]: I0216 13:41:29.282177 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:41:29 crc kubenswrapper[4740]: E0216 13:41:29.285167 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:41:43 crc kubenswrapper[4740]: I0216 13:41:43.292124 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:41:43 crc kubenswrapper[4740]: E0216 13:41:43.292876 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:41:54 crc kubenswrapper[4740]: I0216 
13:41:54.281139 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:41:54 crc kubenswrapper[4740]: E0216 13:41:54.281924 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:42:08 crc kubenswrapper[4740]: I0216 13:42:08.281182 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:42:08 crc kubenswrapper[4740]: E0216 13:42:08.282100 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.905779 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:10 crc kubenswrapper[4740]: E0216 13:42:10.906563 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="registry-server"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.906579 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="registry-server"
Feb 16 13:42:10 crc kubenswrapper[4740]: E0216 13:42:10.906610 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="extract-utilities"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.906619 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="extract-utilities"
Feb 16 13:42:10 crc kubenswrapper[4740]: E0216 13:42:10.906627 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="extract-content"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.906636 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="extract-content"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.906894 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d68afe4b-5647-465b-b601-f16548640dcd" containerName="registry-server"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.909002 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.926689 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.972160 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.972535 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49z5n\" (UniqueName: \"kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:10 crc kubenswrapper[4740]: I0216 13:42:10.972826 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.078216 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.078367 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.078432 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49z5n\" (UniqueName: \"kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.079464 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.079641 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.109423 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49z5n\" (UniqueName: \"kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n\") pod \"redhat-marketplace-zgzdd\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") " pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.239001 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:11 crc kubenswrapper[4740]: I0216 13:42:11.685855 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:12 crc kubenswrapper[4740]: I0216 13:42:12.109423 4740 generic.go:334] "Generic (PLEG): container finished" podID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerID="da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599" exitCode=0
Feb 16 13:42:12 crc kubenswrapper[4740]: I0216 13:42:12.109487 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerDied","Data":"da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599"}
Feb 16 13:42:12 crc kubenswrapper[4740]: I0216 13:42:12.109709 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerStarted","Data":"e23c166928f79bac061c06865364209cfe408791af5880f73140fc249f485603"}
Feb 16 13:42:13 crc kubenswrapper[4740]: I0216 13:42:13.119098 4740 generic.go:334] "Generic (PLEG): container finished" podID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerID="583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3" exitCode=0
Feb 16 13:42:13 crc kubenswrapper[4740]: I0216 13:42:13.119168 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerDied","Data":"583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3"}
Feb 16 13:42:14 crc kubenswrapper[4740]: I0216 13:42:14.131170 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerStarted","Data":"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"}
Feb 16 13:42:14 crc kubenswrapper[4740]: I0216 13:42:14.153959 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zgzdd" podStartSLOduration=2.728885505 podStartE2EDuration="4.153936399s" podCreationTimestamp="2026-02-16 13:42:10 +0000 UTC" firstStartedPulling="2026-02-16 13:42:12.111067356 +0000 UTC m=+2959.487416077" lastFinishedPulling="2026-02-16 13:42:13.53611824 +0000 UTC m=+2960.912466971" observedRunningTime="2026-02-16 13:42:14.148988235 +0000 UTC m=+2961.525336996" watchObservedRunningTime="2026-02-16 13:42:14.153936399 +0000 UTC m=+2961.530285130"
Feb 16 13:42:21 crc kubenswrapper[4740]: I0216 13:42:21.239446 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:21 crc kubenswrapper[4740]: I0216 13:42:21.240979 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:21 crc kubenswrapper[4740]: I0216 13:42:21.291735 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:22 crc kubenswrapper[4740]: I0216 13:42:22.247451 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:22 crc kubenswrapper[4740]: I0216 13:42:22.282089 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:42:22 crc kubenswrapper[4740]: E0216 13:42:22.282515 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:42:22 crc kubenswrapper[4740]: I0216 13:42:22.299536 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.229078 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zgzdd" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="registry-server" containerID="cri-o://c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c" gracePeriod=2
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.707120 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.870786 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content\") pod \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") "
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.870917 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities\") pod \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") "
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.871072 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49z5n\" (UniqueName: \"kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n\") pod \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\" (UID: \"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab\") "
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.871862 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities" (OuterVolumeSpecName: "utilities") pod "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" (UID: "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.876035 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n" (OuterVolumeSpecName: "kube-api-access-49z5n") pod "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" (UID: "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab"). InnerVolumeSpecName "kube-api-access-49z5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.893767 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" (UID: "3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.973801 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.973956 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49z5n\" (UniqueName: \"kubernetes.io/projected/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-kube-api-access-49z5n\") on node \"crc\" DevicePath \"\""
Feb 16 13:42:24 crc kubenswrapper[4740]: I0216 13:42:24.973973 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.240791 4740 generic.go:334] "Generic (PLEG): container finished" podID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerID="c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c" exitCode=0
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.240864 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerDied","Data":"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"}
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.240894 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgzdd" event={"ID":"3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab","Type":"ContainerDied","Data":"e23c166928f79bac061c06865364209cfe408791af5880f73140fc249f485603"}
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.240916 4740 scope.go:117] "RemoveContainer" containerID="c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.240959 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgzdd"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.270825 4740 scope.go:117] "RemoveContainer" containerID="583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.300383 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.300960 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgzdd"]
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.309369 4740 scope.go:117] "RemoveContainer" containerID="da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.356218 4740 scope.go:117] "RemoveContainer" containerID="c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"
Feb 16 13:42:25 crc kubenswrapper[4740]: E0216 13:42:25.357043 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c\": container with ID starting with c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c not found: ID does not exist" containerID="c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.357123 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c"} err="failed to get container status \"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c\": rpc error: code = NotFound desc = could not find container \"c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c\": container with ID starting with c0ca0dfcc6dd049c377ed7d9782d78aeb98759960300c3475781d0abd5d8c64c not found: ID does not exist"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.357158 4740 scope.go:117] "RemoveContainer" containerID="583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3"
Feb 16 13:42:25 crc kubenswrapper[4740]: E0216 13:42:25.357676 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3\": container with ID starting with 583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3 not found: ID does not exist" containerID="583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.357709 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3"} err="failed to get container status \"583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3\": rpc error: code = NotFound desc = could not find container \"583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3\": container with ID starting with 583fa187ba16bd91ad00db884d86c71c36346372b99aeb1346cda48add15c5e3 not found: ID does not exist"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.357731 4740 scope.go:117] "RemoveContainer" containerID="da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599"
Feb 16 13:42:25 crc kubenswrapper[4740]: E0216 13:42:25.357972 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599\": container with ID starting with da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599 not found: ID does not exist" containerID="da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599"
Feb 16 13:42:25 crc kubenswrapper[4740]: I0216 13:42:25.358006 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599"} err="failed to get container status \"da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599\": rpc error: code = NotFound desc = could not find container \"da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599\": container with ID starting with da6a6661448cbabf925ced9e0f7806f9947b23c1338a4088d09b650a507da599 not found: ID does not exist"
Feb 16 13:42:27 crc kubenswrapper[4740]: I0216 13:42:27.302104 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" path="/var/lib/kubelet/pods/3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab/volumes"
Feb 16 13:42:36 crc kubenswrapper[4740]: I0216 13:42:36.281772 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:42:36 crc kubenswrapper[4740]: E0216 13:42:36.282772 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:42:49 crc kubenswrapper[4740]: I0216 13:42:49.281881 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:42:49 crc kubenswrapper[4740]: E0216 13:42:49.282729 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:43:02 crc kubenswrapper[4740]: I0216 13:43:02.282583 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:43:02 crc kubenswrapper[4740]: E0216 13:43:02.283540 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:43:14 crc kubenswrapper[4740]: I0216 13:43:14.281374 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:43:14 crc kubenswrapper[4740]: E0216 13:43:14.282179 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:43:26 crc kubenswrapper[4740]: I0216 13:43:26.281837 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:43:26 crc kubenswrapper[4740]: E0216 13:43:26.282926 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:43:38 crc kubenswrapper[4740]: I0216 13:43:38.281198 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:43:38 crc kubenswrapper[4740]: E0216 13:43:38.282078 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:43:49 crc kubenswrapper[4740]: I0216 13:43:49.281894 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:43:49 crc kubenswrapper[4740]: E0216 13:43:49.282701 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:44:03 crc kubenswrapper[4740]: I0216 13:44:03.289770 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:44:03 crc kubenswrapper[4740]: E0216 13:44:03.290621 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:44:14 crc kubenswrapper[4740]: I0216 13:44:14.281521 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:44:14 crc kubenswrapper[4740]: E0216 13:44:14.282395 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:44:29 crc kubenswrapper[4740]: I0216 13:44:29.281344 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:44:29 crc kubenswrapper[4740]: E0216 13:44:29.282431 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:44:43 crc kubenswrapper[4740]: I0216 13:44:43.289390 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:44:43 crc kubenswrapper[4740]: E0216 13:44:43.290177 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:44:58 crc kubenswrapper[4740]: I0216 13:44:58.282481 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0"
Feb 16 13:44:58 crc kubenswrapper[4740]: E0216 13:44:58.283747 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.157516 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"]
Feb 16 13:45:00 crc kubenswrapper[4740]: E0216 13:45:00.158322 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="extract-content"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.158337 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="extract-content"
Feb 16 13:45:00 crc kubenswrapper[4740]: E0216 13:45:00.158356 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="registry-server"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.158364 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="registry-server"
Feb 16 13:45:00 crc kubenswrapper[4740]: E0216 13:45:00.158386 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="extract-utilities"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.158394 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="extract-utilities"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.158631 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b57b8ed-ca0e-4c8e-bdf4-ae853c8012ab" containerName="registry-server"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.159401 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.161792 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.162097 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.166224 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"]
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.308429 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.308605 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n79bv\" (UniqueName: \"kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.308663 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.409967 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.410070 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.410178 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n79bv\" (UniqueName: \"kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.412142 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.432349 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.436292 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n79bv\" (UniqueName: \"kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv\") pod \"collect-profiles-29520825-dtngc\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.534226 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:00 crc kubenswrapper[4740]: I0216 13:45:00.981909 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"]
Feb 16 13:45:01 crc kubenswrapper[4740]: I0216 13:45:01.693148 4740 generic.go:334] "Generic (PLEG): container finished" podID="4a238536-d171-4c4b-9520-c2bb6ab8931c" containerID="1c71581268eb82f32a7fdeaef37e4f1f954561bbb4a5fc80b6dcbfa17823b9f2" exitCode=0
Feb 16 13:45:01 crc kubenswrapper[4740]: I0216 13:45:01.693213 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" event={"ID":"4a238536-d171-4c4b-9520-c2bb6ab8931c","Type":"ContainerDied","Data":"1c71581268eb82f32a7fdeaef37e4f1f954561bbb4a5fc80b6dcbfa17823b9f2"}
Feb 16 13:45:01 crc kubenswrapper[4740]: I0216 13:45:01.693966 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" event={"ID":"4a238536-d171-4c4b-9520-c2bb6ab8931c","Type":"ContainerStarted","Data":"5a0f734a4dd3a4289578516d4c5c36a09affd96cb09f31afc6c43c08bfa85fbe"}
Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.072185 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.170939 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume\") pod \"4a238536-d171-4c4b-9520-c2bb6ab8931c\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") "
Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.171026 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n79bv\" (UniqueName: \"kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv\") pod \"4a238536-d171-4c4b-9520-c2bb6ab8931c\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") "
Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.171234 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume\") pod \"4a238536-d171-4c4b-9520-c2bb6ab8931c\" (UID: \"4a238536-d171-4c4b-9520-c2bb6ab8931c\") "
Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.172100 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a238536-d171-4c4b-9520-c2bb6ab8931c" (UID: "4a238536-d171-4c4b-9520-c2bb6ab8931c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.176306 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a238536-d171-4c4b-9520-c2bb6ab8931c" (UID: "4a238536-d171-4c4b-9520-c2bb6ab8931c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.176916 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv" (OuterVolumeSpecName: "kube-api-access-n79bv") pod "4a238536-d171-4c4b-9520-c2bb6ab8931c" (UID: "4a238536-d171-4c4b-9520-c2bb6ab8931c"). InnerVolumeSpecName "kube-api-access-n79bv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.273791 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a238536-d171-4c4b-9520-c2bb6ab8931c-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.273857 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a238536-d171-4c4b-9520-c2bb6ab8931c-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.273872 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n79bv\" (UniqueName: \"kubernetes.io/projected/4a238536-d171-4c4b-9520-c2bb6ab8931c-kube-api-access-n79bv\") on node \"crc\" DevicePath \"\""
Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.709171 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc"
event={"ID":"4a238536-d171-4c4b-9520-c2bb6ab8931c","Type":"ContainerDied","Data":"5a0f734a4dd3a4289578516d4c5c36a09affd96cb09f31afc6c43c08bfa85fbe"} Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.709630 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a0f734a4dd3a4289578516d4c5c36a09affd96cb09f31afc6c43c08bfa85fbe" Feb 16 13:45:03 crc kubenswrapper[4740]: I0216 13:45:03.709270 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-dtngc" Feb 16 13:45:04 crc kubenswrapper[4740]: I0216 13:45:04.157035 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r"] Feb 16 13:45:04 crc kubenswrapper[4740]: I0216 13:45:04.166885 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520780-wmq9r"] Feb 16 13:45:05 crc kubenswrapper[4740]: I0216 13:45:05.292093 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb" path="/var/lib/kubelet/pods/1da78c1d-39ab-4b8e-ad97-f3c62f4db3cb/volumes" Feb 16 13:45:06 crc kubenswrapper[4740]: I0216 13:45:06.383620 4740 scope.go:117] "RemoveContainer" containerID="db29968995b45d1f7cc2cd53a227253b37be7ae972a329e7a6e867128e553405" Feb 16 13:45:10 crc kubenswrapper[4740]: I0216 13:45:10.282036 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:45:10 crc kubenswrapper[4740]: E0216 13:45:10.283254 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:45:21 crc kubenswrapper[4740]: I0216 13:45:21.282056 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:45:21 crc kubenswrapper[4740]: E0216 13:45:21.283032 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:45:36 crc kubenswrapper[4740]: I0216 13:45:36.281151 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:45:36 crc kubenswrapper[4740]: E0216 13:45:36.281877 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:45:48 crc kubenswrapper[4740]: I0216 13:45:48.281562 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:45:48 crc kubenswrapper[4740]: E0216 13:45:48.283595 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:46:00 crc kubenswrapper[4740]: I0216 13:46:00.281947 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:46:00 crc kubenswrapper[4740]: E0216 13:46:00.283409 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:46:12 crc kubenswrapper[4740]: I0216 13:46:12.282230 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:46:12 crc kubenswrapper[4740]: E0216 13:46:12.283471 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:46:19 crc kubenswrapper[4740]: I0216 13:46:19.349582 4740 generic.go:334] "Generic (PLEG): container finished" podID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" containerID="026326a984d26a260e5f7dd20f7d5284ba1cee86ee7c080001b48ba2acec81a1" exitCode=0 Feb 16 13:46:19 crc kubenswrapper[4740]: I0216 13:46:19.351947 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"90aac50c-27a6-4ebd-b207-d3bc439dc1fe","Type":"ContainerDied","Data":"026326a984d26a260e5f7dd20f7d5284ba1cee86ee7c080001b48ba2acec81a1"} Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.773888 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.946686 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947178 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947386 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947577 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947710 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947844 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.947991 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.948313 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.948243 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.948403 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data" (OuterVolumeSpecName: "config-data") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.948588 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzmdt\" (UniqueName: \"kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt\") pod \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\" (UID: \"90aac50c-27a6-4ebd-b207-d3bc439dc1fe\") " Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.949113 4740 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.949219 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.952479 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt" (OuterVolumeSpecName: "kube-api-access-xzmdt") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "kube-api-access-xzmdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.955060 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.961434 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.973492 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.982370 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:46:20 crc kubenswrapper[4740]: I0216 13:46:20.986496 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.002367 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "90aac50c-27a6-4ebd-b207-d3bc439dc1fe" (UID: "90aac50c-27a6-4ebd-b207-d3bc439dc1fe"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051193 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzmdt\" (UniqueName: \"kubernetes.io/projected/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-kube-api-access-xzmdt\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051230 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051242 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051255 4740 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051268 4740 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051305 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.051318 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90aac50c-27a6-4ebd-b207-d3bc439dc1fe-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.079609 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.152869 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.370312 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"90aac50c-27a6-4ebd-b207-d3bc439dc1fe","Type":"ContainerDied","Data":"985883f65375a0c0cb1b9ea3b01ad81f1f9f0b11972d1aff7e3292172bada842"} Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.370663 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="985883f65375a0c0cb1b9ea3b01ad81f1f9f0b11972d1aff7e3292172bada842" Feb 16 13:46:21 crc kubenswrapper[4740]: I0216 13:46:21.370409 4740 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.983918 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 16 13:46:24 crc kubenswrapper[4740]: E0216 13:46:24.984865 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" containerName="tempest-tests-tempest-tests-runner" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.984880 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" containerName="tempest-tests-tempest-tests-runner" Feb 16 13:46:24 crc kubenswrapper[4740]: E0216 13:46:24.984911 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a238536-d171-4c4b-9520-c2bb6ab8931c" containerName="collect-profiles" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.984921 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a238536-d171-4c4b-9520-c2bb6ab8931c" containerName="collect-profiles" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.985097 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a238536-d171-4c4b-9520-c2bb6ab8931c" containerName="collect-profiles" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.985112 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="90aac50c-27a6-4ebd-b207-d3bc439dc1fe" containerName="tempest-tests-tempest-tests-runner" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.985744 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:24 crc kubenswrapper[4740]: I0216 13:46:24.987346 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mh4bs" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.001478 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.150917 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.151032 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-589gr\" (UniqueName: \"kubernetes.io/projected/4a270185-f419-49b5-aa81-b6d254269d2d-kube-api-access-589gr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.253261 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.253341 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-589gr\" (UniqueName: 
\"kubernetes.io/projected/4a270185-f419-49b5-aa81-b6d254269d2d-kube-api-access-589gr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.253860 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.277865 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-589gr\" (UniqueName: \"kubernetes.io/projected/4a270185-f419-49b5-aa81-b6d254269d2d-kube-api-access-589gr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.286211 4740 scope.go:117] "RemoveContainer" containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.287957 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4a270185-f419-49b5-aa81-b6d254269d2d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.306309 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.808589 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:46:25 crc kubenswrapper[4740]: I0216 13:46:25.810871 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 16 13:46:26 crc kubenswrapper[4740]: I0216 13:46:26.415019 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c"} Feb 16 13:46:26 crc kubenswrapper[4740]: I0216 13:46:26.418256 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"4a270185-f419-49b5-aa81-b6d254269d2d","Type":"ContainerStarted","Data":"3d5443898f4defc2be87624d016591d90cdbec539e1a8a90123b543107b5b099"} Feb 16 13:46:27 crc kubenswrapper[4740]: I0216 13:46:27.428252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"4a270185-f419-49b5-aa81-b6d254269d2d","Type":"ContainerStarted","Data":"7f0b8d0e04744f9d567adf866192349134d3a900c10908a00883a27662a0346a"} Feb 16 13:46:27 crc kubenswrapper[4740]: I0216 13:46:27.449573 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.328801988 podStartE2EDuration="3.449553022s" podCreationTimestamp="2026-02-16 13:46:24 +0000 UTC" firstStartedPulling="2026-02-16 13:46:25.808359739 +0000 UTC m=+3213.184708460" lastFinishedPulling="2026-02-16 13:46:26.929110773 +0000 UTC m=+3214.305459494" observedRunningTime="2026-02-16 13:46:27.442266364 
+0000 UTC m=+3214.818615095" watchObservedRunningTime="2026-02-16 13:46:27.449553022 +0000 UTC m=+3214.825901743" Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.865237 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8952r/must-gather-5m4h9"] Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.880959 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8952r/must-gather-5m4h9"] Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.881663 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.885222 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8952r"/"openshift-service-ca.crt" Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.885375 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8952r"/"default-dockercfg-qxw8g" Feb 16 13:46:48 crc kubenswrapper[4740]: I0216 13:46:48.885492 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8952r"/"kube-root-ca.crt" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.039661 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5w97\" (UniqueName: \"kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.040453 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " 
pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.142670 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.142849 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5w97\" (UniqueName: \"kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.143261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.171235 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5w97\" (UniqueName: \"kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97\") pod \"must-gather-5m4h9\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.208870 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:46:49 crc kubenswrapper[4740]: I0216 13:46:49.818753 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8952r/must-gather-5m4h9"] Feb 16 13:46:50 crc kubenswrapper[4740]: I0216 13:46:50.637797 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/must-gather-5m4h9" event={"ID":"f7facfd3-bee7-437b-9628-e135acc0d16a","Type":"ContainerStarted","Data":"42871b5417d626a3836de593705b9a20d4dadb26b92dee49e33620324c162cd3"} Feb 16 13:46:56 crc kubenswrapper[4740]: I0216 13:46:56.690102 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/must-gather-5m4h9" event={"ID":"f7facfd3-bee7-437b-9628-e135acc0d16a","Type":"ContainerStarted","Data":"ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f"} Feb 16 13:46:56 crc kubenswrapper[4740]: I0216 13:46:56.690605 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/must-gather-5m4h9" event={"ID":"f7facfd3-bee7-437b-9628-e135acc0d16a","Type":"ContainerStarted","Data":"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80"} Feb 16 13:46:56 crc kubenswrapper[4740]: I0216 13:46:56.705420 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8952r/must-gather-5m4h9" podStartSLOduration=2.865555875 podStartE2EDuration="8.705399715s" podCreationTimestamp="2026-02-16 13:46:48 +0000 UTC" firstStartedPulling="2026-02-16 13:46:49.827415985 +0000 UTC m=+3237.203764706" lastFinishedPulling="2026-02-16 13:46:55.667259825 +0000 UTC m=+3243.043608546" observedRunningTime="2026-02-16 13:46:56.703707702 +0000 UTC m=+3244.080056423" watchObservedRunningTime="2026-02-16 13:46:56.705399715 +0000 UTC m=+3244.081748436" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.403001 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-8952r/crc-debug-468db"] Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.405114 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.440099 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td7ts\" (UniqueName: \"kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.440276 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.542219 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.542678 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td7ts\" (UniqueName: \"kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.542376 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.564985 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td7ts\" (UniqueName: \"kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts\") pod \"crc-debug-468db\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: I0216 13:46:59.722685 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:46:59 crc kubenswrapper[4740]: W0216 13:46:59.767288 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacde68ab_8b2c_4d56_9743_27d87e4829d5.slice/crio-d5680f00a6c5532a4a106cd99dd82ddb65bf357b566d040f81e165ed3f4e9b7a WatchSource:0}: Error finding container d5680f00a6c5532a4a106cd99dd82ddb65bf357b566d040f81e165ed3f4e9b7a: Status 404 returned error can't find the container with id d5680f00a6c5532a4a106cd99dd82ddb65bf357b566d040f81e165ed3f4e9b7a Feb 16 13:47:00 crc kubenswrapper[4740]: I0216 13:47:00.736534 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-468db" event={"ID":"acde68ab-8b2c-4d56-9743-27d87e4829d5","Type":"ContainerStarted","Data":"d5680f00a6c5532a4a106cd99dd82ddb65bf357b566d040f81e165ed3f4e9b7a"} Feb 16 13:47:11 crc kubenswrapper[4740]: I0216 13:47:11.835491 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-468db" event={"ID":"acde68ab-8b2c-4d56-9743-27d87e4829d5","Type":"ContainerStarted","Data":"63f6e3806b9f59c7b120289d4987091581c3ebc8ebf7e0c0bf27263c9e0eeb9f"} Feb 16 13:47:11 crc kubenswrapper[4740]: I0216 
13:47:11.867108 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8952r/crc-debug-468db" podStartSLOduration=1.213017839 podStartE2EDuration="12.867088302s" podCreationTimestamp="2026-02-16 13:46:59 +0000 UTC" firstStartedPulling="2026-02-16 13:46:59.77471058 +0000 UTC m=+3247.151059311" lastFinishedPulling="2026-02-16 13:47:11.428781053 +0000 UTC m=+3258.805129774" observedRunningTime="2026-02-16 13:47:11.85935286 +0000 UTC m=+3259.235701591" watchObservedRunningTime="2026-02-16 13:47:11.867088302 +0000 UTC m=+3259.243437023" Feb 16 13:47:52 crc kubenswrapper[4740]: I0216 13:47:52.404594 4740 generic.go:334] "Generic (PLEG): container finished" podID="acde68ab-8b2c-4d56-9743-27d87e4829d5" containerID="63f6e3806b9f59c7b120289d4987091581c3ebc8ebf7e0c0bf27263c9e0eeb9f" exitCode=0 Feb 16 13:47:52 crc kubenswrapper[4740]: I0216 13:47:52.404686 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-468db" event={"ID":"acde68ab-8b2c-4d56-9743-27d87e4829d5","Type":"ContainerDied","Data":"63f6e3806b9f59c7b120289d4987091581c3ebc8ebf7e0c0bf27263c9e0eeb9f"} Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.523635 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.551279 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8952r/crc-debug-468db"] Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.560786 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8952r/crc-debug-468db"] Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.668954 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td7ts\" (UniqueName: \"kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts\") pod \"acde68ab-8b2c-4d56-9743-27d87e4829d5\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.669583 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host\") pod \"acde68ab-8b2c-4d56-9743-27d87e4829d5\" (UID: \"acde68ab-8b2c-4d56-9743-27d87e4829d5\") " Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.669726 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host" (OuterVolumeSpecName: "host") pod "acde68ab-8b2c-4d56-9743-27d87e4829d5" (UID: "acde68ab-8b2c-4d56-9743-27d87e4829d5"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.670214 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/acde68ab-8b2c-4d56-9743-27d87e4829d5-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.676743 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts" (OuterVolumeSpecName: "kube-api-access-td7ts") pod "acde68ab-8b2c-4d56-9743-27d87e4829d5" (UID: "acde68ab-8b2c-4d56-9743-27d87e4829d5"). InnerVolumeSpecName "kube-api-access-td7ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:47:53 crc kubenswrapper[4740]: I0216 13:47:53.772714 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td7ts\" (UniqueName: \"kubernetes.io/projected/acde68ab-8b2c-4d56-9743-27d87e4829d5-kube-api-access-td7ts\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.422592 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5680f00a6c5532a4a106cd99dd82ddb65bf357b566d040f81e165ed3f4e9b7a" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.422644 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-468db" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.745454 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8952r/crc-debug-chqpb"] Feb 16 13:47:54 crc kubenswrapper[4740]: E0216 13:47:54.745983 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acde68ab-8b2c-4d56-9743-27d87e4829d5" containerName="container-00" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.745999 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="acde68ab-8b2c-4d56-9743-27d87e4829d5" containerName="container-00" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.746207 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="acde68ab-8b2c-4d56-9743-27d87e4829d5" containerName="container-00" Feb 16 13:47:54 crc kubenswrapper[4740]: I0216 13:47:54.746982 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:54.892890 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r45bt\" (UniqueName: \"kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:54.893080 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:54.994738 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r45bt\" (UniqueName: 
\"kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:54.994902 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:54.995010 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:55.025498 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r45bt\" (UniqueName: \"kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt\") pod \"crc-debug-chqpb\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:55.069415 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:55 crc kubenswrapper[4740]: W0216 13:47:55.179524 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f683e3c_2ae1_4a2a_a377_a8e489b9fdd6.slice/crio-e064400f35fc9bc91ee480b7f1778d0054ce2064ad8aa4c127bfbc965dccb461 WatchSource:0}: Error finding container e064400f35fc9bc91ee480b7f1778d0054ce2064ad8aa4c127bfbc965dccb461: Status 404 returned error can't find the container with id e064400f35fc9bc91ee480b7f1778d0054ce2064ad8aa4c127bfbc965dccb461 Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:55.294087 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acde68ab-8b2c-4d56-9743-27d87e4829d5" path="/var/lib/kubelet/pods/acde68ab-8b2c-4d56-9743-27d87e4829d5/volumes" Feb 16 13:47:55 crc kubenswrapper[4740]: I0216 13:47:55.432684 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-chqpb" event={"ID":"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6","Type":"ContainerStarted","Data":"e064400f35fc9bc91ee480b7f1778d0054ce2064ad8aa4c127bfbc965dccb461"} Feb 16 13:47:56 crc kubenswrapper[4740]: I0216 13:47:56.442470 4740 generic.go:334] "Generic (PLEG): container finished" podID="6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" containerID="62f428fad64651455731ee1510e88dff185287c13ccf29ddb69678db4711ae48" exitCode=0 Feb 16 13:47:56 crc kubenswrapper[4740]: I0216 13:47:56.442569 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-chqpb" event={"ID":"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6","Type":"ContainerDied","Data":"62f428fad64651455731ee1510e88dff185287c13ccf29ddb69678db4711ae48"} Feb 16 13:47:56 crc kubenswrapper[4740]: I0216 13:47:56.914750 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8952r/crc-debug-chqpb"] Feb 16 13:47:56 crc kubenswrapper[4740]: I0216 13:47:56.922464 4740 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8952r/crc-debug-chqpb"] Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.558309 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.610315 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r45bt\" (UniqueName: \"kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt\") pod \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.610508 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host\") pod \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\" (UID: \"6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6\") " Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.610575 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host" (OuterVolumeSpecName: "host") pod "6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" (UID: "6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.611322 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.617255 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt" (OuterVolumeSpecName: "kube-api-access-r45bt") pod "6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" (UID: "6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6"). InnerVolumeSpecName "kube-api-access-r45bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:47:57 crc kubenswrapper[4740]: I0216 13:47:57.713711 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r45bt\" (UniqueName: \"kubernetes.io/projected/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6-kube-api-access-r45bt\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.192936 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8952r/crc-debug-vj8wt"] Feb 16 13:47:58 crc kubenswrapper[4740]: E0216 13:47:58.193382 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" containerName="container-00" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.193402 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" containerName="container-00" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.193637 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" containerName="container-00" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.194284 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.224269 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2cw\" (UniqueName: \"kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.224333 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.325857 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2cw\" (UniqueName: \"kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.325925 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.326156 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc 
kubenswrapper[4740]: I0216 13:47:58.347716 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2cw\" (UniqueName: \"kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw\") pod \"crc-debug-vj8wt\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.461987 4740 scope.go:117] "RemoveContainer" containerID="62f428fad64651455731ee1510e88dff185287c13ccf29ddb69678db4711ae48" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.462062 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-chqpb" Feb 16 13:47:58 crc kubenswrapper[4740]: I0216 13:47:58.510724 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.295413 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6" path="/var/lib/kubelet/pods/6f683e3c-2ae1-4a2a-a377-a8e489b9fdd6/volumes" Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.472709 4740 generic.go:334] "Generic (PLEG): container finished" podID="6030077c-d3f7-4009-8ed6-f05b287984cb" containerID="b10bde59a968175480f62e78e4261a2e5bcc77ceb4a1eafa64ca26fa3980f4e2" exitCode=0 Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.472778 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-vj8wt" event={"ID":"6030077c-d3f7-4009-8ed6-f05b287984cb","Type":"ContainerDied","Data":"b10bde59a968175480f62e78e4261a2e5bcc77ceb4a1eafa64ca26fa3980f4e2"} Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.472805 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/crc-debug-vj8wt" 
event={"ID":"6030077c-d3f7-4009-8ed6-f05b287984cb","Type":"ContainerStarted","Data":"3bd7ad1a62877701995657e797fe0b555f43bf058dc16cc503005fb1810502fa"} Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.506857 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8952r/crc-debug-vj8wt"] Feb 16 13:47:59 crc kubenswrapper[4740]: I0216 13:47:59.516014 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8952r/crc-debug-vj8wt"] Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.583179 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.769552 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host\") pod \"6030077c-d3f7-4009-8ed6-f05b287984cb\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.769693 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m2cw\" (UniqueName: \"kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw\") pod \"6030077c-d3f7-4009-8ed6-f05b287984cb\" (UID: \"6030077c-d3f7-4009-8ed6-f05b287984cb\") " Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.770173 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host" (OuterVolumeSpecName: "host") pod "6030077c-d3f7-4009-8ed6-f05b287984cb" (UID: "6030077c-d3f7-4009-8ed6-f05b287984cb"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.770312 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6030077c-d3f7-4009-8ed6-f05b287984cb-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.784007 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw" (OuterVolumeSpecName: "kube-api-access-5m2cw") pod "6030077c-d3f7-4009-8ed6-f05b287984cb" (UID: "6030077c-d3f7-4009-8ed6-f05b287984cb"). InnerVolumeSpecName "kube-api-access-5m2cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:48:00 crc kubenswrapper[4740]: I0216 13:48:00.872764 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m2cw\" (UniqueName: \"kubernetes.io/projected/6030077c-d3f7-4009-8ed6-f05b287984cb-kube-api-access-5m2cw\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:01 crc kubenswrapper[4740]: I0216 13:48:01.292131 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6030077c-d3f7-4009-8ed6-f05b287984cb" path="/var/lib/kubelet/pods/6030077c-d3f7-4009-8ed6-f05b287984cb/volumes" Feb 16 13:48:01 crc kubenswrapper[4740]: I0216 13:48:01.491841 4740 scope.go:117] "RemoveContainer" containerID="b10bde59a968175480f62e78e4261a2e5bcc77ceb4a1eafa64ca26fa3980f4e2" Feb 16 13:48:01 crc kubenswrapper[4740]: I0216 13:48:01.491860 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/crc-debug-vj8wt" Feb 16 13:48:16 crc kubenswrapper[4740]: I0216 13:48:16.829176 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fbb5f795d-phd88_793c4693-2327-492b-9798-18501804cdf3/barbican-api/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.013911 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-758fd9dd8b-46z5m_30b251e5-1979-41ad-ad86-efebb5e6a240/barbican-keystone-listener/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.023584 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fbb5f795d-phd88_793c4693-2327-492b-9798-18501804cdf3/barbican-api-log/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.066081 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-758fd9dd8b-46z5m_30b251e5-1979-41ad-ad86-efebb5e6a240/barbican-keystone-listener-log/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.235587 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f4698b555-qswqc_c3550143-6df6-42d0-b18a-8b6275eac907/barbican-worker/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.243262 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f4698b555-qswqc_c3550143-6df6-42d0-b18a-8b6275eac907/barbican-worker-log/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.474871 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8_8e96214f-a46e-451a-97d9-d448c66826f4/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.540576 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/ceilometer-central-agent/0.log" 
Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.585670 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/ceilometer-notification-agent/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.663921 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/proxy-httpd/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.697967 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/sg-core/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.823503 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcc53865-a327-4f02-a908-f0b97ae1e2c2/cinder-api/0.log" Feb 16 13:48:17 crc kubenswrapper[4740]: I0216 13:48:17.926328 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcc53865-a327-4f02-a908-f0b97ae1e2c2/cinder-api-log/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.001107 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8cc77810-2df3-4a51-8429-326b706d2388/cinder-scheduler/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.080428 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8cc77810-2df3-4a51-8429-326b706d2388/probe/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.146479 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg_3691fefa-c161-4670-bae7-ddde074e2892/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.303476 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw_928b9f1f-3a42-47e3-b895-756f66452ebf/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.392695 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/init/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.558179 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/init/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.642079 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g_fe15334d-14c1-4670-89fe-3b7d4864b782/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.660637 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/dnsmasq-dns/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.815425 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8535644-0ebc-4cc6-bbc5-a5ef02f30685/glance-httpd/0.log" Feb 16 13:48:18 crc kubenswrapper[4740]: I0216 13:48:18.870218 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8535644-0ebc-4cc6-bbc5-a5ef02f30685/glance-log/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.022744 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1da7f67c-ce66-4f6b-b760-f2ae017599c0/glance-httpd/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.054777 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_1da7f67c-ce66-4f6b-b760-f2ae017599c0/glance-log/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.247029 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-56b9fd8c4d-crftf_add1eb0e-dbfc-463a-b676-3e2e2b1f478d/horizon/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.375525 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh_3e117ddc-9ff8-414d-859b-0a16b4846029/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.558787 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-56b9fd8c4d-crftf_add1eb0e-dbfc-463a-b676-3e2e2b1f478d/horizon-log/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.653732 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-42525_bf3c8754-68ef-4956-a95b-c6751d81b5bf/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.878099 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_05c7ea6d-5a24-4b21-851c-e7d51fa61a38/kube-state-metrics/0.log" Feb 16 13:48:19 crc kubenswrapper[4740]: I0216 13:48:19.890344 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5cc7d69b6f-dmv77_e68475b5-404f-48fc-a05a-ea18135e837c/keystone-api/0.log" Feb 16 13:48:20 crc kubenswrapper[4740]: I0216 13:48:20.256726 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fjh65_2ab3e576-ab98-496c-a189-2e79796f9e98/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:20 crc kubenswrapper[4740]: I0216 13:48:20.607391 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-7d8c67b945-9qhdf_2d2e1871-02f7-4ff9-9987-054bf39f4418/neutron-httpd/0.log" Feb 16 13:48:20 crc kubenswrapper[4740]: I0216 13:48:20.657776 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d8c67b945-9qhdf_2d2e1871-02f7-4ff9-9987-054bf39f4418/neutron-api/0.log" Feb 16 13:48:20 crc kubenswrapper[4740]: I0216 13:48:20.837691 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w_3a7cecfd-1168-4187-a70c-7b2151ff214f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:21 crc kubenswrapper[4740]: I0216 13:48:21.401904 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_56ee2c81-2a61-476c-9731-b94363864633/nova-api-log/0.log" Feb 16 13:48:21 crc kubenswrapper[4740]: I0216 13:48:21.511310 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_56ee2c81-2a61-476c-9731-b94363864633/nova-api-api/0.log" Feb 16 13:48:21 crc kubenswrapper[4740]: I0216 13:48:21.598367 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_07256285-a907-4822-80dc-b5f5866d437f/nova-cell0-conductor-conductor/0.log" Feb 16 13:48:21 crc kubenswrapper[4740]: I0216 13:48:21.715442 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4465f42a-9c2a-4aa7-9e45-fa28f78cddd7/nova-cell1-conductor-conductor/0.log" Feb 16 13:48:21 crc kubenswrapper[4740]: I0216 13:48:21.784143 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_94da2ded-002e-4aa6-9828-404bee84c146/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.032855 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-lhwdj_58706e85-268c-4ce0-b1e4-82dd86872568/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.200155 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_722ecd51-0827-457b-8d5c-246a1a57e24a/nova-metadata-log/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.425366 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e3ba9a19-9826-4c43-9907-8cd8f1a4272a/nova-scheduler-scheduler/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.588568 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/mysql-bootstrap/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.775135 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/mysql-bootstrap/0.log" Feb 16 13:48:22 crc kubenswrapper[4740]: I0216 13:48:22.808782 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/galera/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.000424 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/mysql-bootstrap/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.211413 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_722ecd51-0827-457b-8d5c-246a1a57e24a/nova-metadata-metadata/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.222939 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/mysql-bootstrap/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.253280 4740 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/galera/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.432130 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4f78f448-6577-48d1-b077-01e42c14758c/openstackclient/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.548952 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b4j4m_ad1b2300-a42b-4a99-b186-7661bb410a36/openstack-network-exporter/0.log" Feb 16 13:48:23 crc kubenswrapper[4740]: I0216 13:48:23.679328 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server-init/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.045944 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.049495 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovs-vswitchd/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.139312 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server-init/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.296502 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qnt79_04335a5d-7cac-4a47-982c-70cae9db69ff/ovn-controller/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.387656 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-zzdbk_d66e0695-3544-4fd0-9d34-42bea96ea9de/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 
13:48:24.528440 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d4f80435-6b1f-45e1-bc0c-ff150bd3b33b/openstack-network-exporter/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.622053 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d4f80435-6b1f-45e1-bc0c-ff150bd3b33b/ovn-northd/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.731744 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0ba53212-5a6f-45cb-9547-cccd4b36aa32/openstack-network-exporter/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.798434 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0ba53212-5a6f-45cb-9547-cccd4b36aa32/ovsdbserver-nb/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.928474 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_daca8d6b-05ed-4888-9833-9076a4256166/openstack-network-exporter/0.log" Feb 16 13:48:24 crc kubenswrapper[4740]: I0216 13:48:24.995566 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_daca8d6b-05ed-4888-9833-9076a4256166/ovsdbserver-sb/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.228704 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-758758df44-4g6db_983c874c-3b25-49df-82cb-b3dfaf1db7ac/placement-api/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.256803 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-758758df44-4g6db_983c874c-3b25-49df-82cb-b3dfaf1db7ac/placement-log/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.325352 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/setup-container/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.523659 4740 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/rabbitmq/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.548453 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/setup-container/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.616474 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/setup-container/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.767853 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/setup-container/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.795342 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/rabbitmq/0.log" Feb 16 13:48:25 crc kubenswrapper[4740]: I0216 13:48:25.896709 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g_9fa622a2-4774-4038-b9ec-ec4bc7f57a46/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.068889 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4c988_2abfe09c-2736-49b3-b4e5-fb0e30deb510/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.149090 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m_1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.326698 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-r8mds_981b1e60-57d5-4a6b-8531-3fd31dd46fa5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.429921 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-87s8t_8c5c2438-cfba-41a9-b429-80c9ce563348/ssh-known-hosts-edpm-deployment/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.635906 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d4b8b747f-tcdvw_fae3001c-021f-4f48-860e-0893978fafaa/proxy-server/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.813738 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d4b8b747f-tcdvw_fae3001c-021f-4f48-860e-0893978fafaa/proxy-httpd/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.939840 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-auditor/0.log" Feb 16 13:48:26 crc kubenswrapper[4740]: I0216 13:48:26.946373 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4rgvg_8a769496-58ca-4540-9dc4-bd8df7e682fc/swift-ring-rebalance/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.004789 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-reaper/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.149020 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-replicator/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.231456 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-server/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 
13:48:27.286333 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-auditor/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.333506 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-replicator/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.347690 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-server/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.485627 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-updater/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.732456 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-auditor/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.775065 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-expirer/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.831730 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-replicator/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.888122 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-server/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.959072 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-updater/0.log" Feb 16 13:48:27 crc kubenswrapper[4740]: I0216 13:48:27.995026 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/rsync/0.log" Feb 16 13:48:28 crc kubenswrapper[4740]: I0216 13:48:28.106798 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/swift-recon-cron/0.log" Feb 16 13:48:28 crc kubenswrapper[4740]: I0216 13:48:28.286590 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-99lsn_590a1858-7b00-48c8-a2b4-dae7b652ed89/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:28 crc kubenswrapper[4740]: I0216 13:48:28.332544 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_90aac50c-27a6-4ebd-b207-d3bc439dc1fe/tempest-tests-tempest-tests-runner/0.log" Feb 16 13:48:28 crc kubenswrapper[4740]: I0216 13:48:28.470061 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_4a270185-f419-49b5-aa81-b6d254269d2d/test-operator-logs-container/0.log" Feb 16 13:48:28 crc kubenswrapper[4740]: I0216 13:48:28.523109 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-w42sv_5add9653-c644-42d7-bd4d-10ecb8f84a90/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.090356 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:31 crc kubenswrapper[4740]: E0216 13:48:31.090878 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6030077c-d3f7-4009-8ed6-f05b287984cb" containerName="container-00" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.090894 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6030077c-d3f7-4009-8ed6-f05b287984cb" containerName="container-00" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 
13:48:31.091083 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6030077c-d3f7-4009-8ed6-f05b287984cb" containerName="container-00" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.092516 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.100041 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.132664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.132823 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.132951 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wr67\" (UniqueName: \"kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.234119 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities\") pod 
\"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.234202 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.234306 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wr67\" (UniqueName: \"kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.234841 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.234857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content\") pod \"redhat-operators-9wrfg\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.256112 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wr67\" (UniqueName: \"kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67\") pod \"redhat-operators-9wrfg\" (UID: 
\"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:31 crc kubenswrapper[4740]: I0216 13:48:31.410013 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:32 crc kubenswrapper[4740]: I0216 13:48:32.234394 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:32 crc kubenswrapper[4740]: I0216 13:48:32.897316 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerID="5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734" exitCode=0 Feb 16 13:48:32 crc kubenswrapper[4740]: I0216 13:48:32.897535 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerDied","Data":"5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734"} Feb 16 13:48:32 crc kubenswrapper[4740]: I0216 13:48:32.897647 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerStarted","Data":"73ae9f14263612def2e0943ea4c2fc593503fdf00ace78a46c28b24e9e4e133b"} Feb 16 13:48:32 crc kubenswrapper[4740]: I0216 13:48:32.907186 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:32.916560 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:32.959947 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.423112 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtj6\" (UniqueName: \"kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.423490 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.423551 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.527443 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.527517 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.527631 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xtj6\" (UniqueName: \"kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.528757 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.529125 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.577548 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xtj6\" (UniqueName: \"kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6\") pod \"certified-operators-8mx6k\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:33 crc kubenswrapper[4740]: I0216 13:48:33.756131 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:34 crc kubenswrapper[4740]: I0216 13:48:34.433048 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:34 crc kubenswrapper[4740]: W0216 13:48:34.448626 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe664efb_cef2_414d_a946_72a7cc4afd4c.slice/crio-7b60619e007d8dc98b26c2f5889abe357fd7b89615d8c218a01b0c0235e9c0e3 WatchSource:0}: Error finding container 7b60619e007d8dc98b26c2f5889abe357fd7b89615d8c218a01b0c0235e9c0e3: Status 404 returned error can't find the container with id 7b60619e007d8dc98b26c2f5889abe357fd7b89615d8c218a01b0c0235e9c0e3 Feb 16 13:48:34 crc kubenswrapper[4740]: I0216 13:48:34.968181 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerStarted","Data":"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6"} Feb 16 13:48:34 crc kubenswrapper[4740]: I0216 13:48:34.973980 4740 generic.go:334] "Generic (PLEG): container finished" podID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerID="06ffda890106bfcf9095d45de62532e6e9f1bfc381b67688292f8899730cb6b9" exitCode=0 Feb 16 13:48:34 crc kubenswrapper[4740]: I0216 13:48:34.974038 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerDied","Data":"06ffda890106bfcf9095d45de62532e6e9f1bfc381b67688292f8899730cb6b9"} Feb 16 13:48:34 crc kubenswrapper[4740]: I0216 13:48:34.974071 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" 
event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerStarted","Data":"7b60619e007d8dc98b26c2f5889abe357fd7b89615d8c218a01b0c0235e9c0e3"} Feb 16 13:48:35 crc kubenswrapper[4740]: I0216 13:48:35.987105 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerID="75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6" exitCode=0 Feb 16 13:48:35 crc kubenswrapper[4740]: I0216 13:48:35.987446 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerDied","Data":"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6"} Feb 16 13:48:36 crc kubenswrapper[4740]: I0216 13:48:36.540863 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_16622824-15d7-4ff1-8eac-85fe5d8da9db/memcached/0.log" Feb 16 13:48:40 crc kubenswrapper[4740]: I0216 13:48:40.018574 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerStarted","Data":"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774"} Feb 16 13:48:40 crc kubenswrapper[4740]: I0216 13:48:40.020290 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerStarted","Data":"f697c3628ef44f92eb26076f11f5ddfc27c6237487725e570f999862ed89e7b0"} Feb 16 13:48:40 crc kubenswrapper[4740]: I0216 13:48:40.049525 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9wrfg" podStartSLOduration=2.5543570669999998 podStartE2EDuration="9.049506177s" podCreationTimestamp="2026-02-16 13:48:31 +0000 UTC" firstStartedPulling="2026-02-16 13:48:32.902828754 +0000 UTC m=+3340.279177485" lastFinishedPulling="2026-02-16 
13:48:39.397977874 +0000 UTC m=+3346.774326595" observedRunningTime="2026-02-16 13:48:40.049197847 +0000 UTC m=+3347.425546558" watchObservedRunningTime="2026-02-16 13:48:40.049506177 +0000 UTC m=+3347.425854898" Feb 16 13:48:41 crc kubenswrapper[4740]: I0216 13:48:41.030535 4740 generic.go:334] "Generic (PLEG): container finished" podID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerID="f697c3628ef44f92eb26076f11f5ddfc27c6237487725e570f999862ed89e7b0" exitCode=0 Feb 16 13:48:41 crc kubenswrapper[4740]: I0216 13:48:41.030654 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerDied","Data":"f697c3628ef44f92eb26076f11f5ddfc27c6237487725e570f999862ed89e7b0"} Feb 16 13:48:41 crc kubenswrapper[4740]: I0216 13:48:41.410824 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:41 crc kubenswrapper[4740]: I0216 13:48:41.410870 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:42 crc kubenswrapper[4740]: I0216 13:48:42.077069 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerStarted","Data":"2d89f440ef0d6a943923926768e072498d8fb84ee875dab02191bdd2ba08bde2"} Feb 16 13:48:42 crc kubenswrapper[4740]: I0216 13:48:42.113257 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8mx6k" podStartSLOduration=3.6422352890000003 podStartE2EDuration="10.113238127s" podCreationTimestamp="2026-02-16 13:48:32 +0000 UTC" firstStartedPulling="2026-02-16 13:48:34.976044339 +0000 UTC m=+3342.352393060" lastFinishedPulling="2026-02-16 13:48:41.447047187 +0000 UTC m=+3348.823395898" observedRunningTime="2026-02-16 
13:48:42.105161865 +0000 UTC m=+3349.481510586" watchObservedRunningTime="2026-02-16 13:48:42.113238127 +0000 UTC m=+3349.489586848" Feb 16 13:48:42 crc kubenswrapper[4740]: I0216 13:48:42.485171 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9wrfg" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="registry-server" probeResult="failure" output=< Feb 16 13:48:42 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 13:48:42 crc kubenswrapper[4740]: > Feb 16 13:48:43 crc kubenswrapper[4740]: I0216 13:48:43.756839 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:43 crc kubenswrapper[4740]: I0216 13:48:43.756908 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:44 crc kubenswrapper[4740]: I0216 13:48:44.809273 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8mx6k" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="registry-server" probeResult="failure" output=< Feb 16 13:48:44 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 13:48:44 crc kubenswrapper[4740]: > Feb 16 13:48:45 crc kubenswrapper[4740]: I0216 13:48:45.575071 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:48:45 crc kubenswrapper[4740]: I0216 13:48:45.575477 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:48:51 crc kubenswrapper[4740]: I0216 13:48:51.491655 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:51 crc kubenswrapper[4740]: I0216 13:48:51.535031 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:51 crc kubenswrapper[4740]: I0216 13:48:51.743449 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.167485 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9wrfg" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="registry-server" containerID="cri-o://da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774" gracePeriod=2 Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.680642 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.801444 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.841505 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities\") pod \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.841719 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content\") pod \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.841770 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wr67\" (UniqueName: \"kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67\") pod \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\" (UID: \"f8ef2ee5-6259-45b0-9e5e-b34778f39415\") " Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.842783 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities" (OuterVolumeSpecName: "utilities") pod "f8ef2ee5-6259-45b0-9e5e-b34778f39415" (UID: "f8ef2ee5-6259-45b0-9e5e-b34778f39415"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.853531 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.860445 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67" (OuterVolumeSpecName: "kube-api-access-9wr67") pod "f8ef2ee5-6259-45b0-9e5e-b34778f39415" (UID: "f8ef2ee5-6259-45b0-9e5e-b34778f39415"). InnerVolumeSpecName "kube-api-access-9wr67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.945634 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.945682 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wr67\" (UniqueName: \"kubernetes.io/projected/f8ef2ee5-6259-45b0-9e5e-b34778f39415-kube-api-access-9wr67\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:53 crc kubenswrapper[4740]: I0216 13:48:53.984389 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8ef2ee5-6259-45b0-9e5e-b34778f39415" (UID: "f8ef2ee5-6259-45b0-9e5e-b34778f39415"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.047441 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ef2ee5-6259-45b0-9e5e-b34778f39415-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.176949 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerID="da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774" exitCode=0 Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.177020 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wrfg" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.177015 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerDied","Data":"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774"} Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.177078 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wrfg" event={"ID":"f8ef2ee5-6259-45b0-9e5e-b34778f39415","Type":"ContainerDied","Data":"73ae9f14263612def2e0943ea4c2fc593503fdf00ace78a46c28b24e9e4e133b"} Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.177098 4740 scope.go:117] "RemoveContainer" containerID="da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.204336 4740 scope.go:117] "RemoveContainer" containerID="75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.216122 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 
13:48:54.226703 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9wrfg"] Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.231977 4740 scope.go:117] "RemoveContainer" containerID="5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.287746 4740 scope.go:117] "RemoveContainer" containerID="da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774" Feb 16 13:48:54 crc kubenswrapper[4740]: E0216 13:48:54.288128 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774\": container with ID starting with da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774 not found: ID does not exist" containerID="da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.288159 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774"} err="failed to get container status \"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774\": rpc error: code = NotFound desc = could not find container \"da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774\": container with ID starting with da59874b37a9714fe4f56dc2d7d12c70cba86772e20354229f33f63e2a241774 not found: ID does not exist" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.288179 4740 scope.go:117] "RemoveContainer" containerID="75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6" Feb 16 13:48:54 crc kubenswrapper[4740]: E0216 13:48:54.288580 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6\": container with ID 
starting with 75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6 not found: ID does not exist" containerID="75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.288613 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6"} err="failed to get container status \"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6\": rpc error: code = NotFound desc = could not find container \"75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6\": container with ID starting with 75652d9361cb5f4454317c105488be6eaca4fef6d9b51af285e9b41d996af0e6 not found: ID does not exist" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.288634 4740 scope.go:117] "RemoveContainer" containerID="5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734" Feb 16 13:48:54 crc kubenswrapper[4740]: E0216 13:48:54.288999 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734\": container with ID starting with 5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734 not found: ID does not exist" containerID="5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734" Feb 16 13:48:54 crc kubenswrapper[4740]: I0216 13:48:54.289035 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734"} err="failed to get container status \"5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734\": rpc error: code = NotFound desc = could not find container \"5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734\": container with ID starting with 5804739ce569132d6c34a9db40c5629d10e91065c6b64c59d3ab4c4ddde91734 not found: 
ID does not exist" Feb 16 13:48:55 crc kubenswrapper[4740]: I0216 13:48:55.293322 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" path="/var/lib/kubelet/pods/f8ef2ee5-6259-45b0-9e5e-b34778f39415/volumes" Feb 16 13:48:55 crc kubenswrapper[4740]: I0216 13:48:55.944796 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:55 crc kubenswrapper[4740]: I0216 13:48:55.947146 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8mx6k" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="registry-server" containerID="cri-o://2d89f440ef0d6a943923926768e072498d8fb84ee875dab02191bdd2ba08bde2" gracePeriod=2 Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.199762 4740 generic.go:334] "Generic (PLEG): container finished" podID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerID="2d89f440ef0d6a943923926768e072498d8fb84ee875dab02191bdd2ba08bde2" exitCode=0 Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.199851 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerDied","Data":"2d89f440ef0d6a943923926768e072498d8fb84ee875dab02191bdd2ba08bde2"} Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.427652 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.502486 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xtj6\" (UniqueName: \"kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6\") pod \"be664efb-cef2-414d-a946-72a7cc4afd4c\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.502696 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities\") pod \"be664efb-cef2-414d-a946-72a7cc4afd4c\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.502744 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content\") pod \"be664efb-cef2-414d-a946-72a7cc4afd4c\" (UID: \"be664efb-cef2-414d-a946-72a7cc4afd4c\") " Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.525436 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities" (OuterVolumeSpecName: "utilities") pod "be664efb-cef2-414d-a946-72a7cc4afd4c" (UID: "be664efb-cef2-414d-a946-72a7cc4afd4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.538203 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6" (OuterVolumeSpecName: "kube-api-access-5xtj6") pod "be664efb-cef2-414d-a946-72a7cc4afd4c" (UID: "be664efb-cef2-414d-a946-72a7cc4afd4c"). InnerVolumeSpecName "kube-api-access-5xtj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.605057 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xtj6\" (UniqueName: \"kubernetes.io/projected/be664efb-cef2-414d-a946-72a7cc4afd4c-kube-api-access-5xtj6\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.605101 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.606321 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be664efb-cef2-414d-a946-72a7cc4afd4c" (UID: "be664efb-cef2-414d-a946-72a7cc4afd4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:56 crc kubenswrapper[4740]: I0216 13:48:56.706334 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be664efb-cef2-414d-a946-72a7cc4afd4c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.211972 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mx6k" event={"ID":"be664efb-cef2-414d-a946-72a7cc4afd4c","Type":"ContainerDied","Data":"7b60619e007d8dc98b26c2f5889abe357fd7b89615d8c218a01b0c0235e9c0e3"} Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.212305 4740 scope.go:117] "RemoveContainer" containerID="2d89f440ef0d6a943923926768e072498d8fb84ee875dab02191bdd2ba08bde2" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.212473 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mx6k" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.236964 4740 scope.go:117] "RemoveContainer" containerID="f697c3628ef44f92eb26076f11f5ddfc27c6237487725e570f999862ed89e7b0" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.262773 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.265541 4740 scope.go:117] "RemoveContainer" containerID="06ffda890106bfcf9095d45de62532e6e9f1bfc381b67688292f8899730cb6b9" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.271845 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8mx6k"] Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.294988 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" path="/var/lib/kubelet/pods/be664efb-cef2-414d-a946-72a7cc4afd4c/volumes" Feb 16 13:48:57 crc kubenswrapper[4740]: I0216 13:48:57.936745 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.101464 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.148320 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.228173 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.336213 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.353910 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/extract/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.380453 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:48:58 crc kubenswrapper[4740]: I0216 13:48:58.919998 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-9kqqk_069bdc0e-d9e1-4e93-a6fc-8aa439550dd0/manager/0.log" Feb 16 13:48:59 crc kubenswrapper[4740]: I0216 13:48:59.272196 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-9xbzr_90321508-9bb9-458e-ada0-001c779161c1/manager/0.log" Feb 16 13:48:59 crc kubenswrapper[4740]: I0216 13:48:59.365659 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-kk4mh_7f22cc6e-3761-4336-ab1d-74d9fd88432c/manager/0.log" Feb 16 13:48:59 crc kubenswrapper[4740]: I0216 13:48:59.624708 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-nl26x_fdf72675-c282-4f45-ad93-19aa643dcff8/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 
13:49:00.019010 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-rpbmb_f0032304-8799-4a85-964f-2017bfd2dbc8/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 13:49:00.261453 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-v28lz_3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 13:49:00.276431 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-s8wc5_4eba30c7-3dab-4b8f-8a22-2dae642a6ac5/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 13:49:00.509704 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-z2m7j_fce48c02-3aa2-404b-a9a4-7ba789835be0/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 13:49:00.560758 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-44wdn_7f932811-4449-440a-b4c7-4817bfb33dd3/manager/0.log" Feb 16 13:49:00 crc kubenswrapper[4740]: I0216 13:49:00.794382 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-7gw4t_a49c1d67-8cf7-4429-ac73-da13d129304d/manager/0.log" Feb 16 13:49:01 crc kubenswrapper[4740]: I0216 13:49:01.295595 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-7t92r_121ee83b-e7f1-4302-9455-4cc6f53a07a5/manager/0.log" Feb 16 13:49:01 crc kubenswrapper[4740]: I0216 13:49:01.444535 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-fn4g2_ba6767b2-e03c-4c12-880d-90bd809d9b48/manager/0.log" Feb 16 13:49:01 crc 
kubenswrapper[4740]: I0216 13:49:01.734415 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7_76134787-0eff-47bd-982e-16c2c4f98f19/manager/0.log" Feb 16 13:49:02 crc kubenswrapper[4740]: I0216 13:49:02.327259 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f746469c7-kzds7_4c82699a-266c-43ce-acce-32c8aea26c10/operator/0.log" Feb 16 13:49:02 crc kubenswrapper[4740]: I0216 13:49:02.527147 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qzt4t_7fe65e33-ae2e-4f40-b686-454192d6b538/registry-server/0.log" Feb 16 13:49:02 crc kubenswrapper[4740]: I0216 13:49:02.821632 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-gclp4_6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.053501 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-64xmt_c6400043-1325-4af3-8c79-4b383441668c/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.153859 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-pbpdw_00e4da3c-6d3d-459a-86c2-01a4cdb81e51/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.262775 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qttct_3e6434b1-64ba-481f-b001-8a465254dc0a/operator/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.364867 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6865b_519c5b9e-ed4f-4cba-a731-70a22209f642/manager/0.log" Feb 16 
13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.659016 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-cnxhk_04f86073-3515-4d62-a02a-c63d06ecdaaa/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.816631 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-58cw4_7666c640-a9f4-4e09-b79c-7fd31116bd79/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.911491 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-pbkbj_001719d5-3a51-4f6b-b316-9e98f53ed575/manager/0.log" Feb 16 13:49:03 crc kubenswrapper[4740]: I0216 13:49:03.941346 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd688d8fc-7shgl_e749615e-a716-4e6e-8830-947b128e4e58/manager/0.log" Feb 16 13:49:05 crc kubenswrapper[4740]: I0216 13:49:05.297279 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-jsfjx_d6090007-0c13-4ea2-823c-3d95bb336fd8/manager/0.log" Feb 16 13:49:15 crc kubenswrapper[4740]: I0216 13:49:15.575368 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:49:15 crc kubenswrapper[4740]: I0216 13:49:15.575999 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 16 13:49:25 crc kubenswrapper[4740]: I0216 13:49:25.297164 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-m9krp_2eef055f-7504-4f20-817e-afcd1bb6f996/control-plane-machine-set-operator/0.log" Feb 16 13:49:25 crc kubenswrapper[4740]: I0216 13:49:25.477049 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jcv2d_643bf47c-570f-4204-adb1-512cd9e914b8/machine-api-operator/0.log" Feb 16 13:49:25 crc kubenswrapper[4740]: I0216 13:49:25.511786 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jcv2d_643bf47c-570f-4204-adb1-512cd9e914b8/kube-rbac-proxy/0.log" Feb 16 13:49:36 crc kubenswrapper[4740]: I0216 13:49:36.977078 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-kflg5_8b35e0e1-44f6-4481-a71e-98e3f8462bb7/cert-manager-controller/0.log" Feb 16 13:49:37 crc kubenswrapper[4740]: I0216 13:49:37.108216 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-hpjbh_beeada69-65c5-434a-af02-8e6b23e13138/cert-manager-cainjector/0.log" Feb 16 13:49:37 crc kubenswrapper[4740]: I0216 13:49:37.166429 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-25fnr_a68020b3-17ff-43dc-b17d-0845940c0758/cert-manager-webhook/0.log" Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.574905 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.575536 4740 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.575588 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.576391 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.576471 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c" gracePeriod=600 Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.793617 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c" exitCode=0 Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.793690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c"} Feb 16 13:49:45 crc kubenswrapper[4740]: I0216 13:49:45.794139 4740 scope.go:117] "RemoveContainer" 
containerID="2a9cabcd164b4ec6e8bb88a209b2ba2120f55519d3dc2ac76b5ce2cd3b1064b0" Feb 16 13:49:46 crc kubenswrapper[4740]: I0216 13:49:46.805691 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9"} Feb 16 13:49:48 crc kubenswrapper[4740]: I0216 13:49:48.563466 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-nrnvc_edcdba40-6318-4d29-a235-829e94bc8089/nmstate-console-plugin/0.log" Feb 16 13:49:48 crc kubenswrapper[4740]: I0216 13:49:48.733885 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-v88gn_3c0ee084-492b-46da-82b3-9c9a8e1715fd/nmstate-handler/0.log" Feb 16 13:49:48 crc kubenswrapper[4740]: I0216 13:49:48.801006 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-g5mkh_58a2ae40-4e01-43af-907b-7e91246277ea/kube-rbac-proxy/0.log" Feb 16 13:49:48 crc kubenswrapper[4740]: I0216 13:49:48.852009 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-g5mkh_58a2ae40-4e01-43af-907b-7e91246277ea/nmstate-metrics/0.log" Feb 16 13:49:49 crc kubenswrapper[4740]: I0216 13:49:49.001504 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-76m6k_afdcb81a-db2a-4c04-b73b-30facf2d10af/nmstate-operator/0.log" Feb 16 13:49:49 crc kubenswrapper[4740]: I0216 13:49:49.077202 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-r9sw6_b7ffd056-af44-4007-8de6-cc707902d4c4/nmstate-webhook/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.492180 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-69bbfbf88f-kfv4h_e9790ca2-5f44-4c39-a31f-13dc607ab7c4/kube-rbac-proxy/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.659374 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.671950 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-kfv4h_e9790ca2-5f44-4c39-a31f-13dc607ab7c4/controller/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.859155 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.859389 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.859787 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:50:13 crc kubenswrapper[4740]: I0216 13:50:13.936082 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.089214 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.090046 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.099636 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.113607 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.274528 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.282655 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.293022 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.305166 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/controller/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.432856 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/frr-metrics/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.436459 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/kube-rbac-proxy/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.564312 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/kube-rbac-proxy-frr/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.630571 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/reloader/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.853171 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-spwnh_2e220608-2271-4260-bc94-e4d206c718d4/frr-k8s-webhook-server/0.log" Feb 16 13:50:14 crc kubenswrapper[4740]: I0216 13:50:14.909222 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75b694c59-wkpkw_97f25eec-68aa-4b48-b40a-08ce0599d525/manager/0.log" Feb 16 13:50:15 crc kubenswrapper[4740]: I0216 13:50:15.113865 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7887f4bfcc-9grrx_4163a038-60ca-4e8e-bf45-028b04101fc9/webhook-server/0.log" Feb 16 13:50:15 crc kubenswrapper[4740]: I0216 13:50:15.314504 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ffcm2_05937f4c-8149-4db8-bb5e-e863ae011d92/kube-rbac-proxy/0.log" Feb 16 13:50:15 crc kubenswrapper[4740]: I0216 13:50:15.746751 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ffcm2_05937f4c-8149-4db8-bb5e-e863ae011d92/speaker/0.log" Feb 16 13:50:15 crc kubenswrapper[4740]: I0216 13:50:15.825163 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/frr/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.245109 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.485204 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" 
Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.498027 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.519383 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.697708 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.721254 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.740958 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/extract/0.log" Feb 16 13:50:28 crc kubenswrapper[4740]: I0216 13:50:28.861806 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.064086 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.073721 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.099173 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.287942 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.298023 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.500180 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.779203 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/registry-server/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.799166 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.808520 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:50:29 crc kubenswrapper[4740]: I0216 13:50:29.824820 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.026571 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.032169 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.237586 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.474568 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/registry-server/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.483260 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.485026 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.499972 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.671071 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.679066 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.696678 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/extract/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.865528 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xsssg_db2dd193-ab4e-4011-988a-d516f2da367e/marketplace-operator/0.log" Feb 16 13:50:30 crc kubenswrapper[4740]: I0216 13:50:30.876128 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.063109 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.064578 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.087920 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.233845 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.250354 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.368096 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/registry-server/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.428032 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.616347 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.653525 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.653891 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.865671 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:50:31 crc kubenswrapper[4740]: I0216 13:50:31.916638 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 
16 13:50:32 crc kubenswrapper[4740]: I0216 13:50:32.257210 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/registry-server/0.log" Feb 16 13:51:45 crc kubenswrapper[4740]: I0216 13:51:45.575861 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:51:45 crc kubenswrapper[4740]: I0216 13:51:45.576416 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:52:15 crc kubenswrapper[4740]: I0216 13:52:15.574957 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:52:15 crc kubenswrapper[4740]: I0216 13:52:15.577280 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:52:17 crc kubenswrapper[4740]: I0216 13:52:17.110805 4740 generic.go:334] "Generic (PLEG): container finished" podID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerID="934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80" exitCode=0 Feb 16 13:52:17 crc 
kubenswrapper[4740]: I0216 13:52:17.110864 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8952r/must-gather-5m4h9" event={"ID":"f7facfd3-bee7-437b-9628-e135acc0d16a","Type":"ContainerDied","Data":"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80"} Feb 16 13:52:17 crc kubenswrapper[4740]: I0216 13:52:17.113188 4740 scope.go:117] "RemoveContainer" containerID="934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80" Feb 16 13:52:17 crc kubenswrapper[4740]: I0216 13:52:17.553090 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8952r_must-gather-5m4h9_f7facfd3-bee7-437b-9628-e135acc0d16a/gather/0.log" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.850425 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2bf8z"] Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851528 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="extract-utilities" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851543 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="extract-utilities" Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851566 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851575 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851590 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="extract-content" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851599 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="extract-content" Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851620 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="extract-content" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851627 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="extract-content" Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851645 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="extract-utilities" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851653 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="extract-utilities" Feb 16 13:52:25 crc kubenswrapper[4740]: E0216 13:52:25.851665 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851673 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851911 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="be664efb-cef2-414d-a946-72a7cc4afd4c" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.851932 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8ef2ee5-6259-45b0-9e5e-b34778f39415" containerName="registry-server" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.853677 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.863973 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2bf8z"] Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.964331 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.964515 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:25 crc kubenswrapper[4740]: I0216 13:52:25.964562 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndbf\" (UniqueName: \"kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.065921 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.066237 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qndbf\" (UniqueName: \"kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.066325 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.066418 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.066733 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.094426 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qndbf\" (UniqueName: \"kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf\") pod \"community-operators-2bf8z\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") " pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.172637 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2bf8z" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.265435 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8952r/must-gather-5m4h9"] Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.265894 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8952r/must-gather-5m4h9" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="copy" containerID="cri-o://ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f" gracePeriod=2 Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.274160 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8952r/must-gather-5m4h9"] Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.792697 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2bf8z"] Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.841621 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8952r_must-gather-5m4h9_f7facfd3-bee7-437b-9628-e135acc0d16a/copy/0.log" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.842107 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8952r/must-gather-5m4h9" Feb 16 13:52:26 crc kubenswrapper[4740]: I0216 13:52:26.998891 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5w97\" (UniqueName: \"kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97\") pod \"f7facfd3-bee7-437b-9628-e135acc0d16a\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:26.999966 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output\") pod \"f7facfd3-bee7-437b-9628-e135acc0d16a\" (UID: \"f7facfd3-bee7-437b-9628-e135acc0d16a\") " Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.004536 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97" (OuterVolumeSpecName: "kube-api-access-n5w97") pod "f7facfd3-bee7-437b-9628-e135acc0d16a" (UID: "f7facfd3-bee7-437b-9628-e135acc0d16a"). InnerVolumeSpecName "kube-api-access-n5w97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.101909 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5w97\" (UniqueName: \"kubernetes.io/projected/f7facfd3-bee7-437b-9628-e135acc0d16a-kube-api-access-n5w97\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.144729 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f7facfd3-bee7-437b-9628-e135acc0d16a" (UID: "f7facfd3-bee7-437b-9628-e135acc0d16a"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.201860 4740 generic.go:334] "Generic (PLEG): container finished" podID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerID="ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454" exitCode=0 Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.201948 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerDied","Data":"ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454"} Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.202012 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerStarted","Data":"bacf2e1b1d4c403ec4e0b591dbf8f6a87d501268512564c30a6cd79804fa2c73"} Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.203327 4740 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7facfd3-bee7-437b-9628-e135acc0d16a-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.205466 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.205586 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8952r_must-gather-5m4h9_f7facfd3-bee7-437b-9628-e135acc0d16a/copy/0.log" Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.206313 4740 generic.go:334] "Generic (PLEG): container finished" podID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerID="ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f" exitCode=143 Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.206377 4740 scope.go:117] "RemoveContainer" 
containerID="ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.206428 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8952r/must-gather-5m4h9"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.298007 4740 scope.go:117] "RemoveContainer" containerID="934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.304738 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" path="/var/lib/kubelet/pods/f7facfd3-bee7-437b-9628-e135acc0d16a/volumes"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.465997 4740 scope.go:117] "RemoveContainer" containerID="ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f"
Feb 16 13:52:27 crc kubenswrapper[4740]: E0216 13:52:27.466429 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f\": container with ID starting with ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f not found: ID does not exist" containerID="ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.466493 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f"} err="failed to get container status \"ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f\": rpc error: code = NotFound desc = could not find container \"ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f\": container with ID starting with ec4efb17983ccf904b0db83d6cd85fb3c3377bf31e51f847d277853cc9acc96f not found: ID does not exist"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.466519 4740 scope.go:117] "RemoveContainer" containerID="934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80"
Feb 16 13:52:27 crc kubenswrapper[4740]: E0216 13:52:27.466877 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80\": container with ID starting with 934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80 not found: ID does not exist" containerID="934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.466928 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80"} err="failed to get container status \"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80\": rpc error: code = NotFound desc = could not find container \"934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80\": container with ID starting with 934fc64035fca132426cf1dff99b4a0f908773f13878918e2c612e3f2759cd80 not found: ID does not exist"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.659482 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"]
Feb 16 13:52:27 crc kubenswrapper[4740]: E0216 13:52:27.660026 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="copy"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.660056 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="copy"
Feb 16 13:52:27 crc kubenswrapper[4740]: E0216 13:52:27.660088 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="gather"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.660101 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="gather"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.660361 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="gather"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.660398 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7facfd3-bee7-437b-9628-e135acc0d16a" containerName="copy"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.662678 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.676972 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"]
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.817340 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.817445 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.817507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggpbx\" (UniqueName: \"kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.919168 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.919303 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.919383 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggpbx\" (UniqueName: \"kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.919636 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.919898 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.943131 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggpbx\" (UniqueName: \"kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx\") pod \"redhat-marketplace-mhfmz\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") " pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:27 crc kubenswrapper[4740]: I0216 13:52:27.995298 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:28 crc kubenswrapper[4740]: I0216 13:52:28.225261 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerStarted","Data":"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1"}
Feb 16 13:52:28 crc kubenswrapper[4740]: I0216 13:52:28.531978 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"]
Feb 16 13:52:29 crc kubenswrapper[4740]: I0216 13:52:29.239094 4740 generic.go:334] "Generic (PLEG): container finished" podID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerID="e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5" exitCode=0
Feb 16 13:52:29 crc kubenswrapper[4740]: I0216 13:52:29.239159 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerDied","Data":"e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5"}
Feb 16 13:52:29 crc kubenswrapper[4740]: I0216 13:52:29.239555 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerStarted","Data":"bf712a5dae40b5da6a8e2079a8e234db26f56cc4f02bd1cd509ede7dd909ba9a"}
Feb 16 13:52:30 crc kubenswrapper[4740]: I0216 13:52:30.249829 4740 generic.go:334] "Generic (PLEG): container finished" podID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerID="536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1" exitCode=0
Feb 16 13:52:30 crc kubenswrapper[4740]: I0216 13:52:30.249857 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerDied","Data":"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1"}
Feb 16 13:52:31 crc kubenswrapper[4740]: I0216 13:52:31.260457 4740 generic.go:334] "Generic (PLEG): container finished" podID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerID="9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0" exitCode=0
Feb 16 13:52:31 crc kubenswrapper[4740]: I0216 13:52:31.260509 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerDied","Data":"9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0"}
Feb 16 13:52:31 crc kubenswrapper[4740]: I0216 13:52:31.263443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerStarted","Data":"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e"}
Feb 16 13:52:31 crc kubenswrapper[4740]: I0216 13:52:31.299882 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2bf8z" podStartSLOduration=2.836956883 podStartE2EDuration="6.299863799s" podCreationTimestamp="2026-02-16 13:52:25 +0000 UTC" firstStartedPulling="2026-02-16 13:52:27.204750268 +0000 UTC m=+3574.581098989" lastFinishedPulling="2026-02-16 13:52:30.667657184 +0000 UTC m=+3578.044005905" observedRunningTime="2026-02-16 13:52:31.294053498 +0000 UTC m=+3578.670402219" watchObservedRunningTime="2026-02-16 13:52:31.299863799 +0000 UTC m=+3578.676212510"
Feb 16 13:52:32 crc kubenswrapper[4740]: I0216 13:52:32.284154 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerStarted","Data":"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed"}
Feb 16 13:52:32 crc kubenswrapper[4740]: I0216 13:52:32.307732 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mhfmz" podStartSLOduration=2.867420351 podStartE2EDuration="5.307711317s" podCreationTimestamp="2026-02-16 13:52:27 +0000 UTC" firstStartedPulling="2026-02-16 13:52:29.241953245 +0000 UTC m=+3576.618301966" lastFinishedPulling="2026-02-16 13:52:31.682244211 +0000 UTC m=+3579.058592932" observedRunningTime="2026-02-16 13:52:32.300966296 +0000 UTC m=+3579.677315027" watchObservedRunningTime="2026-02-16 13:52:32.307711317 +0000 UTC m=+3579.684060038"
Feb 16 13:52:36 crc kubenswrapper[4740]: I0216 13:52:36.172856 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2bf8z"
Feb 16 13:52:36 crc kubenswrapper[4740]: I0216 13:52:36.173404 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2bf8z"
Feb 16 13:52:36 crc kubenswrapper[4740]: I0216 13:52:36.227417 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2bf8z"
Feb 16 13:52:36 crc kubenswrapper[4740]: I0216 13:52:36.413773 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2bf8z"
Feb 16 13:52:37 crc kubenswrapper[4740]: I0216 13:52:37.996028 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:37 crc kubenswrapper[4740]: I0216 13:52:37.996411 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:38 crc kubenswrapper[4740]: I0216 13:52:38.040563 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:38 crc kubenswrapper[4740]: I0216 13:52:38.410476 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:40 crc kubenswrapper[4740]: I0216 13:52:40.843927 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2bf8z"]
Feb 16 13:52:40 crc kubenswrapper[4740]: I0216 13:52:40.844308 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2bf8z" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="registry-server" containerID="cri-o://2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e" gracePeriod=2
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.290718 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2bf8z"
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.380361 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content\") pod \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") "
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.380578 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qndbf\" (UniqueName: \"kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf\") pod \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") "
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.380630 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities\") pod \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\" (UID: \"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba\") "
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.382611 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities" (OuterVolumeSpecName: "utilities") pod "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" (UID: "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.388244 4740 generic.go:334] "Generic (PLEG): container finished" podID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerID="2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e" exitCode=0
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.388302 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerDied","Data":"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e"}
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.388353 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bf8z" event={"ID":"7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba","Type":"ContainerDied","Data":"bacf2e1b1d4c403ec4e0b591dbf8f6a87d501268512564c30a6cd79804fa2c73"}
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.388384 4740 scope.go:117] "RemoveContainer" containerID="2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e"
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.388388 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2bf8z"
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.389241 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf" (OuterVolumeSpecName: "kube-api-access-qndbf") pod "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" (UID: "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba"). InnerVolumeSpecName "kube-api-access-qndbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.442350 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" (UID: "7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.445258 4740 scope.go:117] "RemoveContainer" containerID="536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1"
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.468250 4740 scope.go:117] "RemoveContainer" containerID="ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454"
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.482823 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qndbf\" (UniqueName: \"kubernetes.io/projected/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-kube-api-access-qndbf\") on node \"crc\" DevicePath \"\""
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.482858 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.482868 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.526465 4740 scope.go:117] "RemoveContainer" containerID="2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e"
Feb 16 13:52:41 crc kubenswrapper[4740]: E0216 13:52:41.527236 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e\": container with ID starting with 2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e not found: ID does not exist" containerID="2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e"
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.527271 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e"} err="failed to get container status \"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e\": rpc error: code = NotFound desc = could not find container \"2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e\": container with ID starting with 2d5262e8f91ab5f8a888b2c57fa584cb0088d7550a1bc05e8e9103ad69d5f65e not found: ID does not exist"
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.527299 4740 scope.go:117] "RemoveContainer" containerID="536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1"
Feb 16 13:52:41 crc kubenswrapper[4740]: E0216 13:52:41.527592 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1\": container with ID starting with 536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1 not found: ID does not exist" containerID="536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1"
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.527622 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1"} err="failed to get container status \"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1\": rpc error: code = NotFound desc = could not find container \"536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1\": container with ID starting with 536cb39b616c97e4d45fcf8480e011a47a38e02354495d0265be167b3b6abdc1 not found: ID does not exist"
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.527638 4740 scope.go:117] "RemoveContainer" containerID="ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454"
Feb 16 13:52:41 crc kubenswrapper[4740]: E0216 13:52:41.527876 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454\": container with ID starting with ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454 not found: ID does not exist" containerID="ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454"
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.527900 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454"} err="failed to get container status \"ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454\": rpc error: code = NotFound desc = could not find container \"ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454\": container with ID starting with ef81b6cf356046be2609c8232531e3022a071eceab0784b3f6b37ad42316b454 not found: ID does not exist"
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.733321 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2bf8z"]
Feb 16 13:52:41 crc kubenswrapper[4740]: I0216 13:52:41.741637 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2bf8z"]
Feb 16 13:52:42 crc kubenswrapper[4740]: I0216 13:52:42.456019 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"]
Feb 16 13:52:42 crc kubenswrapper[4740]: I0216 13:52:42.456267 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mhfmz" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="registry-server" containerID="cri-o://48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed" gracePeriod=2
Feb 16 13:52:42 crc kubenswrapper[4740]: E0216 13:52:42.661335 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6c2b3e3_9a6e_4895_841d_f8be511fec31.slice/crio-48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 13:52:42 crc kubenswrapper[4740]: I0216 13:52:42.923049 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.013709 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities\") pod \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") "
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.013777 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggpbx\" (UniqueName: \"kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx\") pod \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") "
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.013889 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content\") pod \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\" (UID: \"f6c2b3e3-9a6e-4895-841d-f8be511fec31\") "
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.014708 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities" (OuterVolumeSpecName: "utilities") pod "f6c2b3e3-9a6e-4895-841d-f8be511fec31" (UID: "f6c2b3e3-9a6e-4895-841d-f8be511fec31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.021284 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx" (OuterVolumeSpecName: "kube-api-access-ggpbx") pod "f6c2b3e3-9a6e-4895-841d-f8be511fec31" (UID: "f6c2b3e3-9a6e-4895-841d-f8be511fec31"). InnerVolumeSpecName "kube-api-access-ggpbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.041516 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6c2b3e3-9a6e-4895-841d-f8be511fec31" (UID: "f6c2b3e3-9a6e-4895-841d-f8be511fec31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.116033 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.116069 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c2b3e3-9a6e-4895-841d-f8be511fec31-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.116086 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggpbx\" (UniqueName: \"kubernetes.io/projected/f6c2b3e3-9a6e-4895-841d-f8be511fec31-kube-api-access-ggpbx\") on node \"crc\" DevicePath \"\""
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.294381 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" path="/var/lib/kubelet/pods/7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba/volumes"
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.420106 4740 generic.go:334] "Generic (PLEG): container finished" podID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerID="48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed" exitCode=0
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.420159 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhfmz"
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.420164 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerDied","Data":"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed"}
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.420200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhfmz" event={"ID":"f6c2b3e3-9a6e-4895-841d-f8be511fec31","Type":"ContainerDied","Data":"bf712a5dae40b5da6a8e2079a8e234db26f56cc4f02bd1cd509ede7dd909ba9a"}
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.420226 4740 scope.go:117] "RemoveContainer" containerID="48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed"
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.446696 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"]
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.464660 4740 scope.go:117] "RemoveContainer" containerID="9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0"
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.464899 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhfmz"]
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.487452 4740 scope.go:117] "RemoveContainer" containerID="e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5"
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.531116 4740 scope.go:117] "RemoveContainer" containerID="48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed"
Feb 16 13:52:43 crc kubenswrapper[4740]: E0216 13:52:43.531786 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed\": container with ID starting with 48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed not found: ID does not exist" containerID="48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed"
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.531845 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed"} err="failed to get container status \"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed\": rpc error: code = NotFound desc = could not find container \"48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed\": container with ID starting with 48f2d1082070d3365babde0fd9be345de2e3147fbc14622cc6e62769e04ae9ed not found: ID does not exist"
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.531871 4740 scope.go:117] "RemoveContainer" containerID="9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0"
Feb 16 13:52:43 crc kubenswrapper[4740]: E0216 13:52:43.532367 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0\": container with ID starting with 9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0 not found: ID does not exist" containerID="9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0"
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.532398 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0"} err="failed to get container status \"9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0\": rpc error: code = NotFound desc = could not find container \"9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0\": container with ID starting with 9253541625c45cc84806ed77247db17c6e723e1dcbd3f7eb4ecdbcdf04a0d3b0 not found: ID does not exist"
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.532416 4740 scope.go:117] "RemoveContainer" containerID="e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5"
Feb 16 13:52:43 crc kubenswrapper[4740]: E0216 13:52:43.532824 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5\": container with ID starting with e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5 not found: ID does not exist" containerID="e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5"
Feb 16 13:52:43 crc kubenswrapper[4740]: I0216 13:52:43.532851 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5"} err="failed to get container status \"e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5\": rpc error: code = NotFound desc = could not find container \"e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5\": container with ID starting with e70b6ccc85aecedea7320ab4c958a4cbe1c13e18c7305c238da963f91f63b4b5 not found: ID does not exist"
Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 13:52:45.297290 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" path="/var/lib/kubelet/pods/f6c2b3e3-9a6e-4895-841d-f8be511fec31/volumes"
Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 13:52:45.575416 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 13:52:45.575477 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 13:52:45.575514 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj"
Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 13:52:45.576222 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 13:52:45 crc kubenswrapper[4740]: I0216 13:52:45.576277 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" gracePeriod=600
Feb 16 13:52:45 crc kubenswrapper[4740]: E0216 13:52:45.722102 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:52:46 crc kubenswrapper[4740]: I0216 13:52:46.451505 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" exitCode=0
Feb 16 13:52:46 crc kubenswrapper[4740]: I0216 13:52:46.451577 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9"}
Feb 16 13:52:46 crc kubenswrapper[4740]: I0216 13:52:46.451656 4740 scope.go:117] "RemoveContainer" containerID="5103e62a09cfb2e79de97d3b0159807e215ce8acf6944009796514afbe04214c"
Feb 16 13:52:46 crc kubenswrapper[4740]: I0216 13:52:46.452275 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9"
Feb 16 13:52:46 crc kubenswrapper[4740]: E0216 13:52:46.452623 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 13:52:58 crc kubenswrapper[4740]: I0216 13:52:58.281948 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9"
Feb 16 13:52:58 crc kubenswrapper[4740]: E0216 13:52:58.283085 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245"
Feb 16 
13:53:12 crc kubenswrapper[4740]: I0216 13:53:12.281965 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:53:12 crc kubenswrapper[4740]: E0216 13:53:12.283239 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:53:27 crc kubenswrapper[4740]: I0216 13:53:27.283082 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:53:27 crc kubenswrapper[4740]: E0216 13:53:27.284441 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:53:41 crc kubenswrapper[4740]: I0216 13:53:41.281887 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:53:41 crc kubenswrapper[4740]: E0216 13:53:41.282975 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" 
podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:53:56 crc kubenswrapper[4740]: I0216 13:53:56.280698 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:53:56 crc kubenswrapper[4740]: E0216 13:53:56.281565 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:54:06 crc kubenswrapper[4740]: I0216 13:54:06.684010 4740 scope.go:117] "RemoveContainer" containerID="63f6e3806b9f59c7b120289d4987091581c3ebc8ebf7e0c0bf27263c9e0eeb9f" Feb 16 13:54:11 crc kubenswrapper[4740]: I0216 13:54:11.281713 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:54:11 crc kubenswrapper[4740]: E0216 13:54:11.282722 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:54:24 crc kubenswrapper[4740]: I0216 13:54:24.281802 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:54:24 crc kubenswrapper[4740]: E0216 13:54:24.283039 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:54:37 crc kubenswrapper[4740]: I0216 13:54:37.282499 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:54:37 crc kubenswrapper[4740]: E0216 13:54:37.283646 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:54:51 crc kubenswrapper[4740]: I0216 13:54:51.281850 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:54:51 crc kubenswrapper[4740]: E0216 13:54:51.282763 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:55:05 crc kubenswrapper[4740]: I0216 13:55:05.282924 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:55:05 crc kubenswrapper[4740]: E0216 13:55:05.284480 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:55:17 crc kubenswrapper[4740]: I0216 13:55:17.281983 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:55:17 crc kubenswrapper[4740]: E0216 13:55:17.284913 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758026 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2mt6s/must-gather-88jwg"] Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.758907 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="extract-content" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758919 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="extract-content" Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.758936 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758943 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.758951 4740 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758958 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.758970 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="extract-content" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758977 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="extract-content" Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.758991 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="extract-utilities" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.758997 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="extract-utilities" Feb 16 13:55:20 crc kubenswrapper[4740]: E0216 13:55:20.759031 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="extract-utilities" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.759038 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="extract-utilities" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.759270 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfe6b0f-a83c-4a7d-b9a1-1cebabbb0aba" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.759292 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c2b3e3-9a6e-4895-841d-f8be511fec31" containerName="registry-server" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.760270 4740 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.762520 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2mt6s"/"openshift-service-ca.crt" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.762645 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2mt6s"/"default-dockercfg-jlmx2" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.762800 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2mt6s"/"kube-root-ca.crt" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.773107 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output\") pod \"must-gather-88jwg\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.773158 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhgqt\" (UniqueName: \"kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt\") pod \"must-gather-88jwg\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.782057 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2mt6s/must-gather-88jwg"] Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.875126 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output\") pod \"must-gather-88jwg\" (UID: 
\"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.875416 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhgqt\" (UniqueName: \"kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt\") pod \"must-gather-88jwg\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.875642 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output\") pod \"must-gather-88jwg\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:20 crc kubenswrapper[4740]: I0216 13:55:20.904123 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhgqt\" (UniqueName: \"kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt\") pod \"must-gather-88jwg\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:21 crc kubenswrapper[4740]: I0216 13:55:21.093353 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 13:55:21 crc kubenswrapper[4740]: I0216 13:55:21.570228 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2mt6s/must-gather-88jwg"] Feb 16 13:55:21 crc kubenswrapper[4740]: I0216 13:55:21.916318 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/must-gather-88jwg" event={"ID":"edeaee36-29fa-4f01-91d3-e79e65f07117","Type":"ContainerStarted","Data":"6de4ab5b79f3cd9ee7fef7919182ed1cd692decc589a11ce5806dc9ce6541d23"} Feb 16 13:55:21 crc kubenswrapper[4740]: I0216 13:55:21.916667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/must-gather-88jwg" event={"ID":"edeaee36-29fa-4f01-91d3-e79e65f07117","Type":"ContainerStarted","Data":"bce4b8bf3a7474d0370ba27f79c31d25416fb471e68cabe14bceb14676468ad5"} Feb 16 13:55:22 crc kubenswrapper[4740]: I0216 13:55:22.926559 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/must-gather-88jwg" event={"ID":"edeaee36-29fa-4f01-91d3-e79e65f07117","Type":"ContainerStarted","Data":"df5c35636e4b389439eb3bb1245f8b4f007c35530262c25d096fddd9822bcd74"} Feb 16 13:55:22 crc kubenswrapper[4740]: I0216 13:55:22.948278 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2mt6s/must-gather-88jwg" podStartSLOduration=2.948257947 podStartE2EDuration="2.948257947s" podCreationTimestamp="2026-02-16 13:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:22.943675004 +0000 UTC m=+3750.320023735" watchObservedRunningTime="2026-02-16 13:55:22.948257947 +0000 UTC m=+3750.324606668" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.267766 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-pnpgz"] Feb 16 13:55:25 crc kubenswrapper[4740]: 
I0216 13:55:25.270046 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.353347 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vjrf\" (UniqueName: \"kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.353969 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.455862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.455999 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vjrf\" (UniqueName: \"kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.456363 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") 
" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.478566 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vjrf\" (UniqueName: \"kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf\") pod \"crc-debug-pnpgz\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.600084 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:55:25 crc kubenswrapper[4740]: W0216 13:55:25.625995 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76887d5b_6380_4648_940a_bb025db77fc9.slice/crio-75ebbbed34a98b51ae9041e6280ffcb089ea433f15e2db608eb900ba126bb243 WatchSource:0}: Error finding container 75ebbbed34a98b51ae9041e6280ffcb089ea433f15e2db608eb900ba126bb243: Status 404 returned error can't find the container with id 75ebbbed34a98b51ae9041e6280ffcb089ea433f15e2db608eb900ba126bb243 Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.952959 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" event={"ID":"76887d5b-6380-4648-940a-bb025db77fc9","Type":"ContainerStarted","Data":"5967744a58f90558a044d1988dda143828539df3212e40a650ea46f63383d4a1"} Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.953546 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" event={"ID":"76887d5b-6380-4648-940a-bb025db77fc9","Type":"ContainerStarted","Data":"75ebbbed34a98b51ae9041e6280ffcb089ea433f15e2db608eb900ba126bb243"} Feb 16 13:55:25 crc kubenswrapper[4740]: I0216 13:55:25.984496 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" 
podStartSLOduration=0.98447209 podStartE2EDuration="984.47209ms" podCreationTimestamp="2026-02-16 13:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:25.974449608 +0000 UTC m=+3753.350798329" watchObservedRunningTime="2026-02-16 13:55:25.98447209 +0000 UTC m=+3753.360820811" Feb 16 13:55:29 crc kubenswrapper[4740]: I0216 13:55:29.282289 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:55:29 crc kubenswrapper[4740]: E0216 13:55:29.284214 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:55:40 crc kubenswrapper[4740]: I0216 13:55:40.281277 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:55:40 crc kubenswrapper[4740]: E0216 13:55:40.282160 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:55:52 crc kubenswrapper[4740]: I0216 13:55:52.281375 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:55:52 crc kubenswrapper[4740]: E0216 13:55:52.282068 4740 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:56:00 crc kubenswrapper[4740]: I0216 13:56:00.248301 4740 generic.go:334] "Generic (PLEG): container finished" podID="76887d5b-6380-4648-940a-bb025db77fc9" containerID="5967744a58f90558a044d1988dda143828539df3212e40a650ea46f63383d4a1" exitCode=0 Feb 16 13:56:00 crc kubenswrapper[4740]: I0216 13:56:00.248400 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" event={"ID":"76887d5b-6380-4648-940a-bb025db77fc9","Type":"ContainerDied","Data":"5967744a58f90558a044d1988dda143828539df3212e40a650ea46f63383d4a1"} Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.374483 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.414506 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-pnpgz"] Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.431803 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-pnpgz"] Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.519534 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host\") pod \"76887d5b-6380-4648-940a-bb025db77fc9\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.519646 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vjrf\" (UniqueName: \"kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf\") pod \"76887d5b-6380-4648-940a-bb025db77fc9\" (UID: \"76887d5b-6380-4648-940a-bb025db77fc9\") " Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.519635 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host" (OuterVolumeSpecName: "host") pod "76887d5b-6380-4648-940a-bb025db77fc9" (UID: "76887d5b-6380-4648-940a-bb025db77fc9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.520202 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76887d5b-6380-4648-940a-bb025db77fc9-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.531479 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf" (OuterVolumeSpecName: "kube-api-access-4vjrf") pod "76887d5b-6380-4648-940a-bb025db77fc9" (UID: "76887d5b-6380-4648-940a-bb025db77fc9"). InnerVolumeSpecName "kube-api-access-4vjrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:01 crc kubenswrapper[4740]: I0216 13:56:01.626656 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vjrf\" (UniqueName: \"kubernetes.io/projected/76887d5b-6380-4648-940a-bb025db77fc9-kube-api-access-4vjrf\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.291348 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ebbbed34a98b51ae9041e6280ffcb089ea433f15e2db608eb900ba126bb243" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.291754 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-pnpgz" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.621657 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-tm9fr"] Feb 16 13:56:02 crc kubenswrapper[4740]: E0216 13:56:02.622073 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76887d5b-6380-4648-940a-bb025db77fc9" containerName="container-00" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.622086 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="76887d5b-6380-4648-940a-bb025db77fc9" containerName="container-00" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.622257 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="76887d5b-6380-4648-940a-bb025db77fc9" containerName="container-00" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.622875 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.744842 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.744938 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdflq\" (UniqueName: \"kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.846606 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.846705 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdflq\" (UniqueName: \"kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.846826 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.871583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdflq\" (UniqueName: \"kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq\") pod \"crc-debug-tm9fr\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:02 crc kubenswrapper[4740]: I0216 13:56:02.939273 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.293503 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76887d5b-6380-4648-940a-bb025db77fc9" path="/var/lib/kubelet/pods/76887d5b-6380-4648-940a-bb025db77fc9/volumes" Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.299570 4740 generic.go:334] "Generic (PLEG): container finished" podID="f1f88001-ae96-4762-b792-df1ad7dc11fa" containerID="1db78d26ae490a4f59c7495143b0b08733218e0ef503db5fffd97f4525e48294" exitCode=0 Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.299607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" event={"ID":"f1f88001-ae96-4762-b792-df1ad7dc11fa","Type":"ContainerDied","Data":"1db78d26ae490a4f59c7495143b0b08733218e0ef503db5fffd97f4525e48294"} Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.299665 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" event={"ID":"f1f88001-ae96-4762-b792-df1ad7dc11fa","Type":"ContainerStarted","Data":"d9bafc4d0d2ea09f67830ecf3aaef9220530d0aab48f79ff2349255ec9748d6a"} Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.685033 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-tm9fr"] Feb 16 13:56:03 crc kubenswrapper[4740]: I0216 13:56:03.691743 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-tm9fr"] Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.497694 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.580780 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host\") pod \"f1f88001-ae96-4762-b792-df1ad7dc11fa\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.580993 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdflq\" (UniqueName: \"kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq\") pod \"f1f88001-ae96-4762-b792-df1ad7dc11fa\" (UID: \"f1f88001-ae96-4762-b792-df1ad7dc11fa\") " Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.581011 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host" (OuterVolumeSpecName: "host") pod "f1f88001-ae96-4762-b792-df1ad7dc11fa" (UID: "f1f88001-ae96-4762-b792-df1ad7dc11fa"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.581439 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1f88001-ae96-4762-b792-df1ad7dc11fa-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.585951 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq" (OuterVolumeSpecName: "kube-api-access-mdflq") pod "f1f88001-ae96-4762-b792-df1ad7dc11fa" (UID: "f1f88001-ae96-4762-b792-df1ad7dc11fa"). InnerVolumeSpecName "kube-api-access-mdflq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.683455 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdflq\" (UniqueName: \"kubernetes.io/projected/f1f88001-ae96-4762-b792-df1ad7dc11fa-kube-api-access-mdflq\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.878536 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-dcksj"] Feb 16 13:56:04 crc kubenswrapper[4740]: E0216 13:56:04.879399 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f88001-ae96-4762-b792-df1ad7dc11fa" containerName="container-00" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.879416 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f88001-ae96-4762-b792-df1ad7dc11fa" containerName="container-00" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.879677 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f88001-ae96-4762-b792-df1ad7dc11fa" containerName="container-00" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.880289 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.987994 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:04 crc kubenswrapper[4740]: I0216 13:56:04.988085 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6fkz\" (UniqueName: \"kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.090179 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.090263 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6fkz\" (UniqueName: \"kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.090335 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc 
kubenswrapper[4740]: I0216 13:56:05.107358 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6fkz\" (UniqueName: \"kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz\") pod \"crc-debug-dcksj\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.205027 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:05 crc kubenswrapper[4740]: W0216 13:56:05.230202 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fda0424_a309_4a06_a210_ff39f5a0eb25.slice/crio-9cd48cc937325c9e97ddeae47f0df85957bd8937b0f908f11527da4ebd4d17dc WatchSource:0}: Error finding container 9cd48cc937325c9e97ddeae47f0df85957bd8937b0f908f11527da4ebd4d17dc: Status 404 returned error can't find the container with id 9cd48cc937325c9e97ddeae47f0df85957bd8937b0f908f11527da4ebd4d17dc Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.281795 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:56:05 crc kubenswrapper[4740]: E0216 13:56:05.282213 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.292687 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f88001-ae96-4762-b792-df1ad7dc11fa" 
path="/var/lib/kubelet/pods/f1f88001-ae96-4762-b792-df1ad7dc11fa/volumes" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.324416 4740 scope.go:117] "RemoveContainer" containerID="1db78d26ae490a4f59c7495143b0b08733218e0ef503db5fffd97f4525e48294" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.324694 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-tm9fr" Feb 16 13:56:05 crc kubenswrapper[4740]: I0216 13:56:05.332665 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" event={"ID":"6fda0424-a309-4a06-a210-ff39f5a0eb25","Type":"ContainerStarted","Data":"9cd48cc937325c9e97ddeae47f0df85957bd8937b0f908f11527da4ebd4d17dc"} Feb 16 13:56:06 crc kubenswrapper[4740]: I0216 13:56:06.343694 4740 generic.go:334] "Generic (PLEG): container finished" podID="6fda0424-a309-4a06-a210-ff39f5a0eb25" containerID="38ee60ec1837492becce668ff60101aa5f5ff66a9e497025052b346cbcbc6b57" exitCode=0 Feb 16 13:56:06 crc kubenswrapper[4740]: I0216 13:56:06.344082 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" event={"ID":"6fda0424-a309-4a06-a210-ff39f5a0eb25","Type":"ContainerDied","Data":"38ee60ec1837492becce668ff60101aa5f5ff66a9e497025052b346cbcbc6b57"} Feb 16 13:56:06 crc kubenswrapper[4740]: I0216 13:56:06.396543 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-dcksj"] Feb 16 13:56:06 crc kubenswrapper[4740]: I0216 13:56:06.406232 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2mt6s/crc-debug-dcksj"] Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.447703 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.529932 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6fkz\" (UniqueName: \"kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz\") pod \"6fda0424-a309-4a06-a210-ff39f5a0eb25\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.530182 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host\") pod \"6fda0424-a309-4a06-a210-ff39f5a0eb25\" (UID: \"6fda0424-a309-4a06-a210-ff39f5a0eb25\") " Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.530307 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host" (OuterVolumeSpecName: "host") pod "6fda0424-a309-4a06-a210-ff39f5a0eb25" (UID: "6fda0424-a309-4a06-a210-ff39f5a0eb25"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.530667 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6fda0424-a309-4a06-a210-ff39f5a0eb25-host\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.536520 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz" (OuterVolumeSpecName: "kube-api-access-q6fkz") pod "6fda0424-a309-4a06-a210-ff39f5a0eb25" (UID: "6fda0424-a309-4a06-a210-ff39f5a0eb25"). InnerVolumeSpecName "kube-api-access-q6fkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:07 crc kubenswrapper[4740]: I0216 13:56:07.632950 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6fkz\" (UniqueName: \"kubernetes.io/projected/6fda0424-a309-4a06-a210-ff39f5a0eb25-kube-api-access-q6fkz\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:08 crc kubenswrapper[4740]: I0216 13:56:08.361009 4740 scope.go:117] "RemoveContainer" containerID="38ee60ec1837492becce668ff60101aa5f5ff66a9e497025052b346cbcbc6b57" Feb 16 13:56:08 crc kubenswrapper[4740]: I0216 13:56:08.361101 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/crc-debug-dcksj" Feb 16 13:56:09 crc kubenswrapper[4740]: I0216 13:56:09.300639 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fda0424-a309-4a06-a210-ff39f5a0eb25" path="/var/lib/kubelet/pods/6fda0424-a309-4a06-a210-ff39f5a0eb25/volumes" Feb 16 13:56:19 crc kubenswrapper[4740]: I0216 13:56:19.296861 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:56:19 crc kubenswrapper[4740]: E0216 13:56:19.298545 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:56:32 crc kubenswrapper[4740]: I0216 13:56:32.282914 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:56:32 crc kubenswrapper[4740]: E0216 13:56:32.284084 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:56:33 crc kubenswrapper[4740]: I0216 13:56:33.775751 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fbb5f795d-phd88_793c4693-2327-492b-9798-18501804cdf3/barbican-api/0.log" Feb 16 13:56:33 crc kubenswrapper[4740]: I0216 13:56:33.969566 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fbb5f795d-phd88_793c4693-2327-492b-9798-18501804cdf3/barbican-api-log/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.005475 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-758fd9dd8b-46z5m_30b251e5-1979-41ad-ad86-efebb5e6a240/barbican-keystone-listener/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.089575 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-758fd9dd8b-46z5m_30b251e5-1979-41ad-ad86-efebb5e6a240/barbican-keystone-listener-log/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.173093 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f4698b555-qswqc_c3550143-6df6-42d0-b18a-8b6275eac907/barbican-worker/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.230656 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f4698b555-qswqc_c3550143-6df6-42d0-b18a-8b6275eac907/barbican-worker-log/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.384161 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vgdr8_8e96214f-a46e-451a-97d9-d448c66826f4/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.480583 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/ceilometer-central-agent/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.523059 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/ceilometer-notification-agent/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.595603 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/sg-core/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.607464 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dcfe5822-8cae-409c-8224-b1ce2c452e02/proxy-httpd/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.768982 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcc53865-a327-4f02-a908-f0b97ae1e2c2/cinder-api/0.log" Feb 16 13:56:34 crc kubenswrapper[4740]: I0216 13:56:34.800710 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcc53865-a327-4f02-a908-f0b97ae1e2c2/cinder-api-log/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.019505 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8cc77810-2df3-4a51-8429-326b706d2388/cinder-scheduler/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.026331 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8cc77810-2df3-4a51-8429-326b706d2388/probe/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.064390 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-g9tcg_3691fefa-c161-4670-bae7-ddde074e2892/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.209914 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ns5vw_928b9f1f-3a42-47e3-b895-756f66452ebf/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.271209 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/init/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.437250 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/init/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.478356 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mpx4g_fe15334d-14c1-4670-89fe-3b7d4864b782/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.492170 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-5sfmf_dc46d93a-139d-4125-9763-1093f49419a5/dnsmasq-dns/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.686572 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8535644-0ebc-4cc6-bbc5-a5ef02f30685/glance-httpd/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.710773 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8535644-0ebc-4cc6-bbc5-a5ef02f30685/glance-log/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.856913 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_1da7f67c-ce66-4f6b-b760-f2ae017599c0/glance-log/0.log" Feb 16 13:56:35 crc kubenswrapper[4740]: I0216 13:56:35.868179 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1da7f67c-ce66-4f6b-b760-f2ae017599c0/glance-httpd/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.081122 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-56b9fd8c4d-crftf_add1eb0e-dbfc-463a-b676-3e2e2b1f478d/horizon/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.213222 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-nsxsh_3e117ddc-9ff8-414d-859b-0a16b4846029/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.360802 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-42525_bf3c8754-68ef-4956-a95b-c6751d81b5bf/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.447865 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-56b9fd8c4d-crftf_add1eb0e-dbfc-463a-b676-3e2e2b1f478d/horizon-log/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.682708 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5cc7d69b6f-dmv77_e68475b5-404f-48fc-a05a-ea18135e837c/keystone-api/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.685276 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_05c7ea6d-5a24-4b21-851c-e7d51fa61a38/kube-state-metrics/0.log" Feb 16 13:56:36 crc kubenswrapper[4740]: I0216 13:56:36.814347 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fjh65_2ab3e576-ab98-496c-a189-2e79796f9e98/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:37 crc kubenswrapper[4740]: I0216 13:56:37.161461 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d8c67b945-9qhdf_2d2e1871-02f7-4ff9-9987-054bf39f4418/neutron-api/0.log" Feb 16 13:56:37 crc kubenswrapper[4740]: I0216 13:56:37.202488 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d8c67b945-9qhdf_2d2e1871-02f7-4ff9-9987-054bf39f4418/neutron-httpd/0.log" Feb 16 13:56:37 crc kubenswrapper[4740]: I0216 13:56:37.383158 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-5tx5w_3a7cecfd-1168-4187-a70c-7b2151ff214f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:37 crc kubenswrapper[4740]: I0216 13:56:37.948190 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_56ee2c81-2a61-476c-9731-b94363864633/nova-api-log/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.036093 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_07256285-a907-4822-80dc-b5f5866d437f/nova-cell0-conductor-conductor/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.150231 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_56ee2c81-2a61-476c-9731-b94363864633/nova-api-api/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.302556 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4465f42a-9c2a-4aa7-9e45-fa28f78cddd7/nova-cell1-conductor-conductor/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.383887 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_94da2ded-002e-4aa6-9828-404bee84c146/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.414111 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-lhwdj_58706e85-268c-4ce0-b1e4-82dd86872568/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.661201 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_722ecd51-0827-457b-8d5c-246a1a57e24a/nova-metadata-log/0.log" Feb 16 13:56:38 crc kubenswrapper[4740]: I0216 13:56:38.949720 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/mysql-bootstrap/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.001634 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e3ba9a19-9826-4c43-9907-8cd8f1a4272a/nova-scheduler-scheduler/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.102794 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/mysql-bootstrap/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.168374 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0edd2079-790d-4061-aaf4-4213fe6adc7a/galera/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.314971 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/mysql-bootstrap/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.538208 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/mysql-bootstrap/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.540840 4740 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9b2a3679-b8ef-4221-a9f6-ccd863696aa8/galera/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.701928 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4f78f448-6577-48d1-b077-01e42c14758c/openstackclient/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.749704 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b4j4m_ad1b2300-a42b-4a99-b186-7661bb410a36/openstack-network-exporter/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.869524 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_722ecd51-0827-457b-8d5c-246a1a57e24a/nova-metadata-metadata/0.log" Feb 16 13:56:39 crc kubenswrapper[4740]: I0216 13:56:39.970276 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server-init/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.142332 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server-init/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.170560 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovs-vswitchd/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.217136 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-crblj_9b2536c4-0b82-4b42-9fe3-20237884d803/ovsdb-server/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.363638 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qnt79_04335a5d-7cac-4a47-982c-70cae9db69ff/ovn-controller/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.516412 4740 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-zzdbk_d66e0695-3544-4fd0-9d34-42bea96ea9de/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.600305 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d4f80435-6b1f-45e1-bc0c-ff150bd3b33b/openstack-network-exporter/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.702293 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d4f80435-6b1f-45e1-bc0c-ff150bd3b33b/ovn-northd/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.830183 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0ba53212-5a6f-45cb-9547-cccd4b36aa32/openstack-network-exporter/0.log" Feb 16 13:56:40 crc kubenswrapper[4740]: I0216 13:56:40.878775 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0ba53212-5a6f-45cb-9547-cccd4b36aa32/ovsdbserver-nb/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.070584 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_daca8d6b-05ed-4888-9833-9076a4256166/openstack-network-exporter/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.082526 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_daca8d6b-05ed-4888-9833-9076a4256166/ovsdbserver-sb/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.282978 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-758758df44-4g6db_983c874c-3b25-49df-82cb-b3dfaf1db7ac/placement-api/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.316909 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/setup-container/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.414951 
4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-758758df44-4g6db_983c874c-3b25-49df-82cb-b3dfaf1db7ac/placement-log/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.594885 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/rabbitmq/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.599578 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_05abd29a-2c3c-4129-9afd-859a65e1ef45/setup-container/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.670965 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/setup-container/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.824076 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/setup-container/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.866548 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-m9r9g_9fa622a2-4774-4038-b9ec-ec4bc7f57a46/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:41 crc kubenswrapper[4740]: I0216 13:56:41.912020 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6ad16000-fb9f-4231-91fe-239907bba675/rabbitmq/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.098873 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4c988_2abfe09c-2736-49b3-b4e5-fb0e30deb510/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.124258 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-k4g8m_1e403d2d-bd7d-4fa6-a2a4-e15f63d2b090/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.325383 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-87s8t_8c5c2438-cfba-41a9-b429-80c9ce563348/ssh-known-hosts-edpm-deployment/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.341645 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-r8mds_981b1e60-57d5-4a6b-8531-3fd31dd46fa5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.634720 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d4b8b747f-tcdvw_fae3001c-021f-4f48-860e-0893978fafaa/proxy-server/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.720985 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d4b8b747f-tcdvw_fae3001c-021f-4f48-860e-0893978fafaa/proxy-httpd/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.752996 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4rgvg_8a769496-58ca-4540-9dc4-bd8df7e682fc/swift-ring-rebalance/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.867405 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-auditor/0.log" Feb 16 13:56:42 crc kubenswrapper[4740]: I0216 13:56:42.927240 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-reaper/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.002332 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-replicator/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.174973 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-auditor/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.242519 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/account-server/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.319543 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-server/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.387888 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-replicator/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.401625 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/container-updater/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.451700 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-auditor/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.537726 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-expirer/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.627487 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-replicator/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.649056 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-server/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.661352 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/object-updater/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.741087 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/rsync/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.813718 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8953d6de-24a5-4645-b270-2bbafe5b17c5/swift-recon-cron/0.log" Feb 16 13:56:43 crc kubenswrapper[4740]: I0216 13:56:43.904082 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-99lsn_590a1858-7b00-48c8-a2b4-dae7b652ed89/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:44 crc kubenswrapper[4740]: I0216 13:56:44.087988 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_90aac50c-27a6-4ebd-b207-d3bc439dc1fe/tempest-tests-tempest-tests-runner/0.log" Feb 16 13:56:44 crc kubenswrapper[4740]: I0216 13:56:44.147853 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_4a270185-f419-49b5-aa81-b6d254269d2d/test-operator-logs-container/0.log" Feb 16 13:56:44 crc kubenswrapper[4740]: I0216 13:56:44.281801 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:56:44 crc kubenswrapper[4740]: E0216 13:56:44.282079 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:56:44 crc kubenswrapper[4740]: I0216 13:56:44.289707 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-w42sv_5add9653-c644-42d7-bd4d-10ecb8f84a90/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 13:56:54 crc kubenswrapper[4740]: I0216 13:56:54.513226 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_16622824-15d7-4ff1-8eac-85fe5d8da9db/memcached/0.log" Feb 16 13:56:56 crc kubenswrapper[4740]: I0216 13:56:56.281188 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:56:56 crc kubenswrapper[4740]: E0216 13:56:56.281914 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:57:07 crc kubenswrapper[4740]: I0216 13:57:07.927877 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.087385 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.112699 4740 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.118987 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.341456 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/util/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.343119 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/extract/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.344638 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_839821d02b67fa352b5f2f2742cf71374a58067197cd468c715f3fd4e7nl547_7597307b-d3fd-4fa0-b370-a6d08b6a2daa/pull/0.log" Feb 16 13:57:08 crc kubenswrapper[4740]: I0216 13:57:08.857750 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-9kqqk_069bdc0e-d9e1-4e93-a6fc-8aa439550dd0/manager/0.log" Feb 16 13:57:09 crc kubenswrapper[4740]: I0216 13:57:09.379855 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-9xbzr_90321508-9bb9-458e-ada0-001c779161c1/manager/0.log" Feb 16 13:57:09 crc kubenswrapper[4740]: I0216 13:57:09.428670 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-kk4mh_7f22cc6e-3761-4336-ab1d-74d9fd88432c/manager/0.log" Feb 16 13:57:09 crc 
kubenswrapper[4740]: I0216 13:57:09.638851 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-nl26x_fdf72675-c282-4f45-ad93-19aa643dcff8/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.012477 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-rpbmb_f0032304-8799-4a85-964f-2017bfd2dbc8/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.163085 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-v28lz_3e8ba6f6-40ab-47f2-bee7-ddbfc30b9b17/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.257705 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-s8wc5_4eba30c7-3dab-4b8f-8a22-2dae642a6ac5/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.442783 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-z2m7j_fce48c02-3aa2-404b-a9a4-7ba789835be0/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.514008 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-44wdn_7f932811-4449-440a-b4c7-4817bfb33dd3/manager/0.log" Feb 16 13:57:10 crc kubenswrapper[4740]: I0216 13:57:10.797140 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-7gw4t_a49c1d67-8cf7-4429-ac73-da13d129304d/manager/0.log" Feb 16 13:57:11 crc kubenswrapper[4740]: I0216 13:57:11.006336 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-7t92r_121ee83b-e7f1-4302-9455-4cc6f53a07a5/manager/0.log" Feb 16 
13:57:11 crc kubenswrapper[4740]: I0216 13:57:11.209644 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-fn4g2_ba6767b2-e03c-4c12-880d-90bd809d9b48/manager/0.log" Feb 16 13:57:11 crc kubenswrapper[4740]: I0216 13:57:11.280774 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:57:11 crc kubenswrapper[4740]: E0216 13:57:11.281055 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:57:11 crc kubenswrapper[4740]: I0216 13:57:11.539419 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cfv9j7_76134787-0eff-47bd-982e-16c2c4f98f19/manager/0.log" Feb 16 13:57:12 crc kubenswrapper[4740]: I0216 13:57:12.002098 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f746469c7-kzds7_4c82699a-266c-43ce-acce-32c8aea26c10/operator/0.log" Feb 16 13:57:12 crc kubenswrapper[4740]: I0216 13:57:12.414042 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qzt4t_7fe65e33-ae2e-4f40-b686-454192d6b538/registry-server/0.log" Feb 16 13:57:12 crc kubenswrapper[4740]: I0216 13:57:12.673433 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-gclp4_6d65efdf-ffc7-44cd-9dd1-1b4d9be2e2a4/manager/0.log" Feb 16 13:57:12 crc kubenswrapper[4740]: I0216 13:57:12.717750 4740 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-pbpdw_00e4da3c-6d3d-459a-86c2-01a4cdb81e51/manager/0.log" Feb 16 13:57:12 crc kubenswrapper[4740]: I0216 13:57:12.863936 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-64xmt_c6400043-1325-4af3-8c79-4b383441668c/manager/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.017019 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qttct_3e6434b1-64ba-481f-b001-8a465254dc0a/operator/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.278586 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6865b_519c5b9e-ed4f-4cba-a731-70a22209f642/manager/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.388331 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-cnxhk_04f86073-3515-4d62-a02a-c63d06ecdaaa/manager/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.518112 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-58cw4_7666c640-a9f4-4e09-b79c-7fd31116bd79/manager/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.655590 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-pbkbj_001719d5-3a51-4f6b-b316-9e98f53ed575/manager/0.log" Feb 16 13:57:13 crc kubenswrapper[4740]: I0216 13:57:13.829629 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd688d8fc-7shgl_e749615e-a716-4e6e-8830-947b128e4e58/manager/0.log" Feb 16 13:57:15 crc kubenswrapper[4740]: I0216 13:57:15.419184 
4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-jsfjx_d6090007-0c13-4ea2-823c-3d95bb336fd8/manager/0.log" Feb 16 13:57:25 crc kubenswrapper[4740]: I0216 13:57:25.281640 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:57:25 crc kubenswrapper[4740]: E0216 13:57:25.283744 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:57:34 crc kubenswrapper[4740]: I0216 13:57:34.127471 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-m9krp_2eef055f-7504-4f20-817e-afcd1bb6f996/control-plane-machine-set-operator/0.log" Feb 16 13:57:34 crc kubenswrapper[4740]: I0216 13:57:34.760482 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jcv2d_643bf47c-570f-4204-adb1-512cd9e914b8/kube-rbac-proxy/0.log" Feb 16 13:57:34 crc kubenswrapper[4740]: I0216 13:57:34.926717 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jcv2d_643bf47c-570f-4204-adb1-512cd9e914b8/machine-api-operator/0.log" Feb 16 13:57:40 crc kubenswrapper[4740]: I0216 13:57:40.280791 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:57:40 crc kubenswrapper[4740]: E0216 13:57:40.281633 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 13:57:46 crc kubenswrapper[4740]: I0216 13:57:46.437379 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-kflg5_8b35e0e1-44f6-4481-a71e-98e3f8462bb7/cert-manager-controller/0.log" Feb 16 13:57:46 crc kubenswrapper[4740]: I0216 13:57:46.774046 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-25fnr_a68020b3-17ff-43dc-b17d-0845940c0758/cert-manager-webhook/0.log" Feb 16 13:57:46 crc kubenswrapper[4740]: I0216 13:57:46.787523 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-hpjbh_beeada69-65c5-434a-af02-8e6b23e13138/cert-manager-cainjector/0.log" Feb 16 13:57:55 crc kubenswrapper[4740]: I0216 13:57:55.281312 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9" Feb 16 13:57:55 crc kubenswrapper[4740]: I0216 13:57:55.565169 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"2869e78774c1f00fe4f31559cf3edaa4b9383697775554e132318f0021981a4c"} Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.413096 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-nrnvc_edcdba40-6318-4d29-a235-829e94bc8089/nmstate-console-plugin/0.log" Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.585124 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-v88gn_3c0ee084-492b-46da-82b3-9c9a8e1715fd/nmstate-handler/0.log" Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.633483 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-g5mkh_58a2ae40-4e01-43af-907b-7e91246277ea/kube-rbac-proxy/0.log" Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.692844 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-g5mkh_58a2ae40-4e01-43af-907b-7e91246277ea/nmstate-metrics/0.log" Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.853918 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-76m6k_afdcb81a-db2a-4c04-b73b-30facf2d10af/nmstate-operator/0.log" Feb 16 13:57:59 crc kubenswrapper[4740]: I0216 13:57:59.880976 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-r9sw6_b7ffd056-af44-4007-8de6-cc707902d4c4/nmstate-webhook/0.log" Feb 16 13:58:29 crc kubenswrapper[4740]: I0216 13:58:29.986966 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-kfv4h_e9790ca2-5f44-4c39-a31f-13dc607ab7c4/kube-rbac-proxy/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.111964 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-kfv4h_e9790ca2-5f44-4c39-a31f-13dc607ab7c4/controller/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.142867 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.435092 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 
13:58:30.447164 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.471563 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.490944 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.690602 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.719938 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.742327 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.742570 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.899304 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-reloader/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.905565 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-frr-files/0.log" Feb 16 13:58:30 crc kubenswrapper[4740]: I0216 13:58:30.911403 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/cp-metrics/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.167730 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/controller/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.414521 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/kube-rbac-proxy/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.455015 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/frr-metrics/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.503343 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/kube-rbac-proxy-frr/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.655939 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/reloader/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.680802 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-spwnh_2e220608-2271-4260-bc94-e4d206c718d4/frr-k8s-webhook-server/0.log" Feb 16 13:58:31 crc kubenswrapper[4740]: I0216 13:58:31.903178 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75b694c59-wkpkw_97f25eec-68aa-4b48-b40a-08ce0599d525/manager/0.log" Feb 16 13:58:32 crc kubenswrapper[4740]: I0216 13:58:32.053009 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7887f4bfcc-9grrx_4163a038-60ca-4e8e-bf45-028b04101fc9/webhook-server/0.log" Feb 16 13:58:32 crc kubenswrapper[4740]: I0216 13:58:32.207286 4740 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ffcm2_05937f4c-8149-4db8-bb5e-e863ae011d92/kube-rbac-proxy/0.log" Feb 16 13:58:32 crc kubenswrapper[4740]: I0216 13:58:32.789390 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ffcm2_05937f4c-8149-4db8-bb5e-e863ae011d92/speaker/0.log" Feb 16 13:58:32 crc kubenswrapper[4740]: I0216 13:58:32.876104 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-frlcd_28f2676a-f290-4e9d-9622-d8808c6b8192/frr/0.log" Feb 16 13:58:42 crc kubenswrapper[4740]: I0216 13:58:42.930460 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:42 crc kubenswrapper[4740]: E0216 13:58:42.931558 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fda0424-a309-4a06-a210-ff39f5a0eb25" containerName="container-00" Feb 16 13:58:42 crc kubenswrapper[4740]: I0216 13:58:42.931575 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fda0424-a309-4a06-a210-ff39f5a0eb25" containerName="container-00" Feb 16 13:58:42 crc kubenswrapper[4740]: I0216 13:58:42.931793 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fda0424-a309-4a06-a210-ff39f5a0eb25" containerName="container-00" Feb 16 13:58:42 crc kubenswrapper[4740]: I0216 13:58:42.933502 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:42 crc kubenswrapper[4740]: I0216 13:58:42.947109 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.025424 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.025545 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.025589 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56r84\" (UniqueName: \"kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.127267 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.127385 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.127445 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56r84\" (UniqueName: \"kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.128213 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.128260 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.150371 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56r84\" (UniqueName: \"kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84\") pod \"certified-operators-6gjpb\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.309746 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:43 crc kubenswrapper[4740]: I0216 13:58:43.818859 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:44 crc kubenswrapper[4740]: I0216 13:58:44.018649 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerStarted","Data":"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574"} Feb 16 13:58:44 crc kubenswrapper[4740]: I0216 13:58:44.019047 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerStarted","Data":"a65a57223c4501a1de109d7a544a32c06da41978c8bb203cc209c339ca8971f8"} Feb 16 13:58:45 crc kubenswrapper[4740]: I0216 13:58:45.027643 4740 generic.go:334] "Generic (PLEG): container finished" podID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerID="ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574" exitCode=0 Feb 16 13:58:45 crc kubenswrapper[4740]: I0216 13:58:45.027737 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerDied","Data":"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574"} Feb 16 13:58:45 crc kubenswrapper[4740]: I0216 13:58:45.029710 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.038057 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerStarted","Data":"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e"} Feb 16 13:58:46 crc 
kubenswrapper[4740]: I0216 13:58:46.168616 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.455750 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.494911 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.499462 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.679854 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/util/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.686834 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/pull/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.728190 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p9nqk_911ccf29-a1bf-402a-b445-df244f1acb70/extract/0.log" Feb 16 13:58:46 crc kubenswrapper[4740]: I0216 13:58:46.844576 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.045670 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-content/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.048938 4740 generic.go:334] "Generic (PLEG): container finished" podID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerID="bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e" exitCode=0 Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.049007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerDied","Data":"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e"} Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.068887 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-content/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.082103 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.199002 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.228787 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6gjpb_2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/extract-content/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.359346 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.565433 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.598528 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.616684 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.799638 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-utilities/0.log" Feb 16 13:58:47 crc kubenswrapper[4740]: I0216 13:58:47.819386 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/extract-content/0.log" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.065547 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerStarted","Data":"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460"} Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.080323 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.087675 4740 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/certified-operators-6gjpb" podStartSLOduration=3.6568634639999997 podStartE2EDuration="6.087656993s" podCreationTimestamp="2026-02-16 13:58:42 +0000 UTC" firstStartedPulling="2026-02-16 13:58:45.029508855 +0000 UTC m=+3952.405857576" lastFinishedPulling="2026-02-16 13:58:47.460302384 +0000 UTC m=+3954.836651105" observedRunningTime="2026-02-16 13:58:48.086859229 +0000 UTC m=+3955.463207950" watchObservedRunningTime="2026-02-16 13:58:48.087656993 +0000 UTC m=+3955.464005704" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.293093 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbn89_60d9eb5f-5eed-4968-beae-0001d2d70d2a/registry-server/0.log" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.861039 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.869566 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:58:48 crc kubenswrapper[4740]: I0216 13:58:48.895998 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.157880 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-utilities/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.227689 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/extract-content/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.393647 4740 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.601472 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zzh8l_805f4cce-9373-4649-8daa-e97ab900433f/registry-server/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.632579 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.710115 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.763490 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.801706 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/util/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.863986 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/pull/0.log" Feb 16 13:58:49 crc kubenswrapper[4740]: I0216 13:58:49.904610 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecarvhtp_4e36b7f7-a888-4da4-a510-deafe9588b20/extract/0.log" Feb 16 13:58:50 crc kubenswrapper[4740]: I0216 13:58:50.026297 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xsssg_db2dd193-ab4e-4011-988a-d516f2da367e/marketplace-operator/0.log" Feb 16 13:58:50 crc kubenswrapper[4740]: I0216 13:58:50.722007 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.045460 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.110148 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.122660 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.362183 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.433523 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/extract-content/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.478842 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.482978 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lv7b8_6ecdfb1a-6379-4a42-a4c7-da582898b1f3/registry-server/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.611715 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.660858 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.660903 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.806806 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-utilities/0.log" Feb 16 13:58:51 crc kubenswrapper[4740]: I0216 13:58:51.819495 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/extract-content/0.log" Feb 16 13:58:52 crc kubenswrapper[4740]: I0216 13:58:52.182372 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7j9d2_b4d0e942-91bf-460d-9465-2633c1436b2c/registry-server/0.log" Feb 16 13:58:53 crc kubenswrapper[4740]: I0216 13:58:53.310000 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:53 crc kubenswrapper[4740]: I0216 
13:58:53.310340 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:53 crc kubenswrapper[4740]: I0216 13:58:53.369071 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:54 crc kubenswrapper[4740]: I0216 13:58:54.154732 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:54 crc kubenswrapper[4740]: I0216 13:58:54.223837 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.129621 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6gjpb" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="registry-server" containerID="cri-o://bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460" gracePeriod=2 Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.849678 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.954561 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content\") pod \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.954949 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56r84\" (UniqueName: \"kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84\") pod \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.955128 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities\") pod \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\" (UID: \"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282\") " Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.955591 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities" (OuterVolumeSpecName: "utilities") pod "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" (UID: "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.955895 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:56 crc kubenswrapper[4740]: I0216 13:58:56.961097 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84" (OuterVolumeSpecName: "kube-api-access-56r84") pod "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" (UID: "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282"). InnerVolumeSpecName "kube-api-access-56r84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.013551 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" (UID: "2e5a9c35-3e18-4e2e-a3e7-11184ef8f282"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.058258 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.058293 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56r84\" (UniqueName: \"kubernetes.io/projected/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282-kube-api-access-56r84\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.138382 4740 generic.go:334] "Generic (PLEG): container finished" podID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerID="bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460" exitCode=0 Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.138431 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerDied","Data":"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460"} Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.138439 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6gjpb" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.138463 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gjpb" event={"ID":"2e5a9c35-3e18-4e2e-a3e7-11184ef8f282","Type":"ContainerDied","Data":"a65a57223c4501a1de109d7a544a32c06da41978c8bb203cc209c339ca8971f8"} Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.138489 4740 scope.go:117] "RemoveContainer" containerID="bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.158829 4740 scope.go:117] "RemoveContainer" containerID="bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.175382 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.183122 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6gjpb"] Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.183763 4740 scope.go:117] "RemoveContainer" containerID="ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.225100 4740 scope.go:117] "RemoveContainer" containerID="bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460" Feb 16 13:58:57 crc kubenswrapper[4740]: E0216 13:58:57.225497 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460\": container with ID starting with bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460 not found: ID does not exist" containerID="bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.225534 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460"} err="failed to get container status \"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460\": rpc error: code = NotFound desc = could not find container \"bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460\": container with ID starting with bf8786ff5112721510e54fd39a66ea74688c31a2dff349f31cbcdbbec67bd460 not found: ID does not exist" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.225560 4740 scope.go:117] "RemoveContainer" containerID="bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e" Feb 16 13:58:57 crc kubenswrapper[4740]: E0216 13:58:57.225978 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e\": container with ID starting with bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e not found: ID does not exist" containerID="bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.226013 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e"} err="failed to get container status \"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e\": rpc error: code = NotFound desc = could not find container \"bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e\": container with ID starting with bd07d1f093ca58eed7abc881abde3763e9ce54254d946b10210a70503e88be7e not found: ID does not exist" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.226038 4740 scope.go:117] "RemoveContainer" containerID="ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574" Feb 16 13:58:57 crc kubenswrapper[4740]: E0216 
13:58:57.226232 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574\": container with ID starting with ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574 not found: ID does not exist" containerID="ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.226249 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574"} err="failed to get container status \"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574\": rpc error: code = NotFound desc = could not find container \"ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574\": container with ID starting with ea28d5699459a94d15b99b2712d3e9bdd38a13ac4a0ecba3cdb81cb8e8d12574 not found: ID does not exist" Feb 16 13:58:57 crc kubenswrapper[4740]: I0216 13:58:57.303476 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" path="/var/lib/kubelet/pods/2e5a9c35-3e18-4e2e-a3e7-11184ef8f282/volumes" Feb 16 13:59:26 crc kubenswrapper[4740]: E0216 13:59:26.387272 4740 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:44404->38.102.83.147:36137: write tcp 38.102.83.147:44404->38.102.83.147:36137: write: broken pipe Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.355638 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 13:59:56 crc kubenswrapper[4740]: E0216 13:59:56.360906 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="extract-utilities" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.370962 4740 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="extract-utilities" Feb 16 13:59:56 crc kubenswrapper[4740]: E0216 13:59:56.371090 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="registry-server" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.371111 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="registry-server" Feb 16 13:59:56 crc kubenswrapper[4740]: E0216 13:59:56.371147 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="extract-content" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.371157 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="extract-content" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.371893 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5a9c35-3e18-4e2e-a3e7-11184ef8f282" containerName="registry-server" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.373692 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.373875 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.469319 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.469681 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws46f\" (UniqueName: \"kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.470169 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.572011 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.572101 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities\") pod \"redhat-operators-hxxtb\" (UID: 
\"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.572136 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws46f\" (UniqueName: \"kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.572840 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.572869 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.598784 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws46f\" (UniqueName: \"kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f\") pod \"redhat-operators-hxxtb\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:56 crc kubenswrapper[4740]: I0216 13:59:56.702574 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 13:59:57 crc kubenswrapper[4740]: I0216 13:59:57.187503 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 13:59:57 crc kubenswrapper[4740]: I0216 13:59:57.699184 4740 generic.go:334] "Generic (PLEG): container finished" podID="919ebe38-6d23-4da9-a367-69340e2f8574" containerID="9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b" exitCode=0 Feb 16 13:59:57 crc kubenswrapper[4740]: I0216 13:59:57.699276 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerDied","Data":"9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b"} Feb 16 13:59:57 crc kubenswrapper[4740]: I0216 13:59:57.699496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerStarted","Data":"f872b2600a32c5dc9ffd8d3280c5731606294c2c3a7a9e4f4aa51a1b5cfc5bf7"} Feb 16 13:59:58 crc kubenswrapper[4740]: I0216 13:59:58.711932 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerStarted","Data":"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053"} Feb 16 13:59:59 crc kubenswrapper[4740]: I0216 13:59:59.727134 4740 generic.go:334] "Generic (PLEG): container finished" podID="919ebe38-6d23-4da9-a367-69340e2f8574" containerID="3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053" exitCode=0 Feb 16 13:59:59 crc kubenswrapper[4740]: I0216 13:59:59.727216 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" 
event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerDied","Data":"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053"} Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.208106 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms"] Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.211360 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.214384 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.217437 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.230336 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms"] Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.288453 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.288536 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mltns\" (UniqueName: \"kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.288598 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.390784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mltns\" (UniqueName: \"kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.390879 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.390996 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.392079 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.400223 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.411508 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mltns\" (UniqueName: \"kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns\") pod \"collect-profiles-29520840-shkms\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.542695 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.743795 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerStarted","Data":"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507"} Feb 16 14:00:00 crc kubenswrapper[4740]: I0216 14:00:00.772952 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hxxtb" podStartSLOduration=2.245311904 podStartE2EDuration="4.772934407s" podCreationTimestamp="2026-02-16 13:59:56 +0000 UTC" firstStartedPulling="2026-02-16 13:59:57.701007949 +0000 UTC m=+4025.077356670" lastFinishedPulling="2026-02-16 14:00:00.228630452 +0000 UTC m=+4027.604979173" observedRunningTime="2026-02-16 14:00:00.771635627 +0000 UTC m=+4028.147984358" watchObservedRunningTime="2026-02-16 14:00:00.772934407 +0000 UTC m=+4028.149283128" Feb 16 14:00:01 crc kubenswrapper[4740]: I0216 14:00:01.026134 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms"] Feb 16 14:00:01 crc kubenswrapper[4740]: W0216 14:00:01.263259 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb02f2d5a_f1ed_477d_b41d_7e1b56eb3b81.slice/crio-36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac WatchSource:0}: Error finding container 36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac: Status 404 returned error can't find the container with id 36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac Feb 16 14:00:01 crc kubenswrapper[4740]: I0216 14:00:01.761476 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" 
event={"ID":"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81","Type":"ContainerStarted","Data":"0520dd5b7bcf57cec6d803dfb30fa63d746d3bcde271cc9a1ba8c2c6bf06aba8"} Feb 16 14:00:01 crc kubenswrapper[4740]: I0216 14:00:01.761940 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" event={"ID":"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81","Type":"ContainerStarted","Data":"36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac"} Feb 16 14:00:01 crc kubenswrapper[4740]: I0216 14:00:01.786405 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" podStartSLOduration=1.786385691 podStartE2EDuration="1.786385691s" podCreationTimestamp="2026-02-16 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:00:01.780340923 +0000 UTC m=+4029.156689654" watchObservedRunningTime="2026-02-16 14:00:01.786385691 +0000 UTC m=+4029.162734412" Feb 16 14:00:02 crc kubenswrapper[4740]: I0216 14:00:02.773565 4740 generic.go:334] "Generic (PLEG): container finished" podID="b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" containerID="0520dd5b7bcf57cec6d803dfb30fa63d746d3bcde271cc9a1ba8c2c6bf06aba8" exitCode=0 Feb 16 14:00:02 crc kubenswrapper[4740]: I0216 14:00:02.773641 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" event={"ID":"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81","Type":"ContainerDied","Data":"0520dd5b7bcf57cec6d803dfb30fa63d746d3bcde271cc9a1ba8c2c6bf06aba8"} Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.211062 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.388757 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mltns\" (UniqueName: \"kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns\") pod \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.389120 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume\") pod \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.389219 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume\") pod \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\" (UID: \"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81\") " Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.391604 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume" (OuterVolumeSpecName: "config-volume") pod "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" (UID: "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.397516 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" (UID: "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.401452 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns" (OuterVolumeSpecName: "kube-api-access-mltns") pod "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" (UID: "b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81"). InnerVolumeSpecName "kube-api-access-mltns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.492185 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.492537 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mltns\" (UniqueName: \"kubernetes.io/projected/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-kube-api-access-mltns\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.492550 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.799943 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" event={"ID":"b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81","Type":"ContainerDied","Data":"36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac"} Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.800001 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36becf11ba19c69707c1d7385bc3dbc49077e7966a8cca1eb0779c96eebd33ac" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.800078 4740 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-shkms" Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.864846 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"] Feb 16 14:00:04 crc kubenswrapper[4740]: I0216 14:00:04.873383 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520795-jf4h8"] Feb 16 14:00:05 crc kubenswrapper[4740]: I0216 14:00:05.318410 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab47f99f-f805-4d2e-bdf6-6da944e511a5" path="/var/lib/kubelet/pods/ab47f99f-f805-4d2e-bdf6-6da944e511a5/volumes" Feb 16 14:00:06 crc kubenswrapper[4740]: I0216 14:00:06.703666 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:06 crc kubenswrapper[4740]: I0216 14:00:06.704522 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:06 crc kubenswrapper[4740]: I0216 14:00:06.853410 4740 scope.go:117] "RemoveContainer" containerID="abe9c24d5f732811d552e04df67f2330c658e4db7a4f4498f3fb4c1af1df86df" Feb 16 14:00:07 crc kubenswrapper[4740]: I0216 14:00:07.775892 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxxtb" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="registry-server" probeResult="failure" output=< Feb 16 14:00:07 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Feb 16 14:00:07 crc kubenswrapper[4740]: > Feb 16 14:00:15 crc kubenswrapper[4740]: I0216 14:00:15.575676 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:00:15 crc kubenswrapper[4740]: I0216 14:00:15.578164 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:00:16 crc kubenswrapper[4740]: I0216 14:00:16.762896 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:16 crc kubenswrapper[4740]: I0216 14:00:16.820070 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:17 crc kubenswrapper[4740]: I0216 14:00:17.003395 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 14:00:17 crc kubenswrapper[4740]: I0216 14:00:17.940171 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hxxtb" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="registry-server" containerID="cri-o://813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507" gracePeriod=2 Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.471027 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.519182 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities\") pod \"919ebe38-6d23-4da9-a367-69340e2f8574\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.519315 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws46f\" (UniqueName: \"kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f\") pod \"919ebe38-6d23-4da9-a367-69340e2f8574\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.519519 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content\") pod \"919ebe38-6d23-4da9-a367-69340e2f8574\" (UID: \"919ebe38-6d23-4da9-a367-69340e2f8574\") " Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.520167 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities" (OuterVolumeSpecName: "utilities") pod "919ebe38-6d23-4da9-a367-69340e2f8574" (UID: "919ebe38-6d23-4da9-a367-69340e2f8574"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.526083 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f" (OuterVolumeSpecName: "kube-api-access-ws46f") pod "919ebe38-6d23-4da9-a367-69340e2f8574" (UID: "919ebe38-6d23-4da9-a367-69340e2f8574"). InnerVolumeSpecName "kube-api-access-ws46f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.621606 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.621649 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws46f\" (UniqueName: \"kubernetes.io/projected/919ebe38-6d23-4da9-a367-69340e2f8574-kube-api-access-ws46f\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.653212 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "919ebe38-6d23-4da9-a367-69340e2f8574" (UID: "919ebe38-6d23-4da9-a367-69340e2f8574"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.722986 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919ebe38-6d23-4da9-a367-69340e2f8574-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.951463 4740 generic.go:334] "Generic (PLEG): container finished" podID="919ebe38-6d23-4da9-a367-69340e2f8574" containerID="813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507" exitCode=0 Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.951657 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerDied","Data":"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507"} Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.952916 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-hxxtb" event={"ID":"919ebe38-6d23-4da9-a367-69340e2f8574","Type":"ContainerDied","Data":"f872b2600a32c5dc9ffd8d3280c5731606294c2c3a7a9e4f4aa51a1b5cfc5bf7"} Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.953020 4740 scope.go:117] "RemoveContainer" containerID="813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.951804 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxxtb" Feb 16 14:00:18 crc kubenswrapper[4740]: I0216 14:00:18.992692 4740 scope.go:117] "RemoveContainer" containerID="3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.000057 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.011725 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hxxtb"] Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.025438 4740 scope.go:117] "RemoveContainer" containerID="9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.058210 4740 scope.go:117] "RemoveContainer" containerID="813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507" Feb 16 14:00:19 crc kubenswrapper[4740]: E0216 14:00:19.058638 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507\": container with ID starting with 813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507 not found: ID does not exist" containerID="813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.058685 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507"} err="failed to get container status \"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507\": rpc error: code = NotFound desc = could not find container \"813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507\": container with ID starting with 813b34fe55e19fcaad0aeb86281b78f2fc242ab4bef0918fe0d4edfd8f2f5507 not found: ID does not exist" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.058712 4740 scope.go:117] "RemoveContainer" containerID="3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053" Feb 16 14:00:19 crc kubenswrapper[4740]: E0216 14:00:19.059108 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053\": container with ID starting with 3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053 not found: ID does not exist" containerID="3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.059146 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053"} err="failed to get container status \"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053\": rpc error: code = NotFound desc = could not find container \"3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053\": container with ID starting with 3deadb5c6db2d33d57ae5d5029cb73816ba45e49e784270ca470341e17ede053 not found: ID does not exist" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.059173 4740 scope.go:117] "RemoveContainer" containerID="9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b" Feb 16 14:00:19 crc kubenswrapper[4740]: E0216 
14:00:19.059613 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b\": container with ID starting with 9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b not found: ID does not exist" containerID="9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.059641 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b"} err="failed to get container status \"9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b\": rpc error: code = NotFound desc = could not find container \"9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b\": container with ID starting with 9e860a018ab769a5d4673af92ec435d499d3c0198303a4e76b326de5639aff7b not found: ID does not exist" Feb 16 14:00:19 crc kubenswrapper[4740]: I0216 14:00:19.294364 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" path="/var/lib/kubelet/pods/919ebe38-6d23-4da9-a367-69340e2f8574/volumes" Feb 16 14:00:38 crc kubenswrapper[4740]: I0216 14:00:38.164739 4740 generic.go:334] "Generic (PLEG): container finished" podID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerID="6de4ab5b79f3cd9ee7fef7919182ed1cd692decc589a11ce5806dc9ce6541d23" exitCode=0 Feb 16 14:00:38 crc kubenswrapper[4740]: I0216 14:00:38.164893 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2mt6s/must-gather-88jwg" event={"ID":"edeaee36-29fa-4f01-91d3-e79e65f07117","Type":"ContainerDied","Data":"6de4ab5b79f3cd9ee7fef7919182ed1cd692decc589a11ce5806dc9ce6541d23"} Feb 16 14:00:38 crc kubenswrapper[4740]: I0216 14:00:38.166196 4740 scope.go:117] "RemoveContainer" 
containerID="6de4ab5b79f3cd9ee7fef7919182ed1cd692decc589a11ce5806dc9ce6541d23" Feb 16 14:00:38 crc kubenswrapper[4740]: I0216 14:00:38.285097 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2mt6s_must-gather-88jwg_edeaee36-29fa-4f01-91d3-e79e65f07117/gather/0.log" Feb 16 14:00:45 crc kubenswrapper[4740]: I0216 14:00:45.577318 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:00:45 crc kubenswrapper[4740]: I0216 14:00:45.578350 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:00:48 crc kubenswrapper[4740]: I0216 14:00:48.964924 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2mt6s/must-gather-88jwg"] Feb 16 14:00:48 crc kubenswrapper[4740]: I0216 14:00:48.965919 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2mt6s/must-gather-88jwg" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="copy" containerID="cri-o://df5c35636e4b389439eb3bb1245f8b4f007c35530262c25d096fddd9822bcd74" gracePeriod=2 Feb 16 14:00:48 crc kubenswrapper[4740]: I0216 14:00:48.973034 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2mt6s/must-gather-88jwg"] Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.292675 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2mt6s_must-gather-88jwg_edeaee36-29fa-4f01-91d3-e79e65f07117/copy/0.log" Feb 16 14:00:49 crc 
kubenswrapper[4740]: I0216 14:00:49.293973 4740 generic.go:334] "Generic (PLEG): container finished" podID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerID="df5c35636e4b389439eb3bb1245f8b4f007c35530262c25d096fddd9822bcd74" exitCode=143 Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.416286 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2mt6s_must-gather-88jwg_edeaee36-29fa-4f01-91d3-e79e65f07117/copy/0.log" Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.416618 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.500411 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhgqt\" (UniqueName: \"kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt\") pod \"edeaee36-29fa-4f01-91d3-e79e65f07117\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.500531 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output\") pod \"edeaee36-29fa-4f01-91d3-e79e65f07117\" (UID: \"edeaee36-29fa-4f01-91d3-e79e65f07117\") " Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.509712 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt" (OuterVolumeSpecName: "kube-api-access-dhgqt") pod "edeaee36-29fa-4f01-91d3-e79e65f07117" (UID: "edeaee36-29fa-4f01-91d3-e79e65f07117"). InnerVolumeSpecName "kube-api-access-dhgqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.603741 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhgqt\" (UniqueName: \"kubernetes.io/projected/edeaee36-29fa-4f01-91d3-e79e65f07117-kube-api-access-dhgqt\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.651979 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "edeaee36-29fa-4f01-91d3-e79e65f07117" (UID: "edeaee36-29fa-4f01-91d3-e79e65f07117"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:00:49 crc kubenswrapper[4740]: I0216 14:00:49.705174 4740 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/edeaee36-29fa-4f01-91d3-e79e65f07117-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:50 crc kubenswrapper[4740]: I0216 14:00:50.302650 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2mt6s_must-gather-88jwg_edeaee36-29fa-4f01-91d3-e79e65f07117/copy/0.log" Feb 16 14:00:50 crc kubenswrapper[4740]: I0216 14:00:50.303080 4740 scope.go:117] "RemoveContainer" containerID="df5c35636e4b389439eb3bb1245f8b4f007c35530262c25d096fddd9822bcd74" Feb 16 14:00:50 crc kubenswrapper[4740]: I0216 14:00:50.303115 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2mt6s/must-gather-88jwg" Feb 16 14:00:50 crc kubenswrapper[4740]: I0216 14:00:50.325196 4740 scope.go:117] "RemoveContainer" containerID="6de4ab5b79f3cd9ee7fef7919182ed1cd692decc589a11ce5806dc9ce6541d23" Feb 16 14:00:51 crc kubenswrapper[4740]: I0216 14:00:51.293635 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" path="/var/lib/kubelet/pods/edeaee36-29fa-4f01-91d3-e79e65f07117/volumes" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.151124 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29520841-7tm7q"] Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152204 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="extract-utilities" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152223 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="extract-utilities" Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152239 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="copy" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152251 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="copy" Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152276 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="extract-content" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152285 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="extract-content" Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152300 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="gather" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152316 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="gather" Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152338 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152350 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4740]: E0216 14:01:00.152394 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="registry-server" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152406 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="registry-server" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152714 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="gather" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152736 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b02f2d5a-f1ed-477d-b41d-7e1b56eb3b81" containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152761 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="919ebe38-6d23-4da9-a367-69340e2f8574" containerName="registry-server" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.152787 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="edeaee36-29fa-4f01-91d3-e79e65f07117" containerName="copy" Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.153515 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.167246 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520841-7tm7q"]
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.206832 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.207014 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.207060 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdzhj\" (UniqueName: \"kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.207133 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.310623 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.310732 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.310932 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.310978 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdzhj\" (UniqueName: \"kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.319890 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.319943 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.320342 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.330281 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdzhj\" (UniqueName: \"kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj\") pod \"keystone-cron-29520841-7tm7q\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") " pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.508495 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:00 crc kubenswrapper[4740]: I0216 14:01:00.958082 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520841-7tm7q"]
Feb 16 14:01:01 crc kubenswrapper[4740]: I0216 14:01:01.423771 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-7tm7q" event={"ID":"3b1c60db-544d-480d-9234-ee41ad93e3aa","Type":"ContainerStarted","Data":"bd544e35de62220cc933766cedce5c0e2eacafd4b4ff0fe8a1e9e88ec1d9c2b4"}
Feb 16 14:01:01 crc kubenswrapper[4740]: I0216 14:01:01.423989 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-7tm7q" event={"ID":"3b1c60db-544d-480d-9234-ee41ad93e3aa","Type":"ContainerStarted","Data":"cfb7f5048e1f526bca8580aae38a6068d9f9f89f5e184d8546cc26884d344bb8"}
Feb 16 14:01:01 crc kubenswrapper[4740]: I0216 14:01:01.450117 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29520841-7tm7q" podStartSLOduration=1.450097477 podStartE2EDuration="1.450097477s" podCreationTimestamp="2026-02-16 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:01:01.439188686 +0000 UTC m=+4088.815537407" watchObservedRunningTime="2026-02-16 14:01:01.450097477 +0000 UTC m=+4088.826446208"
Feb 16 14:01:03 crc kubenswrapper[4740]: I0216 14:01:03.449530 4740 generic.go:334] "Generic (PLEG): container finished" podID="3b1c60db-544d-480d-9234-ee41ad93e3aa" containerID="bd544e35de62220cc933766cedce5c0e2eacafd4b4ff0fe8a1e9e88ec1d9c2b4" exitCode=0
Feb 16 14:01:03 crc kubenswrapper[4740]: I0216 14:01:03.449635 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-7tm7q" event={"ID":"3b1c60db-544d-480d-9234-ee41ad93e3aa","Type":"ContainerDied","Data":"bd544e35de62220cc933766cedce5c0e2eacafd4b4ff0fe8a1e9e88ec1d9c2b4"}
Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.850990 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.911885 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdzhj\" (UniqueName: \"kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj\") pod \"3b1c60db-544d-480d-9234-ee41ad93e3aa\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") "
Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.912029 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle\") pod \"3b1c60db-544d-480d-9234-ee41ad93e3aa\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") "
Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.912180 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys\") pod \"3b1c60db-544d-480d-9234-ee41ad93e3aa\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") "
Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.912267 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data\") pod \"3b1c60db-544d-480d-9234-ee41ad93e3aa\" (UID: \"3b1c60db-544d-480d-9234-ee41ad93e3aa\") "
Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.925023 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3b1c60db-544d-480d-9234-ee41ad93e3aa" (UID: "3b1c60db-544d-480d-9234-ee41ad93e3aa"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.925042 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj" (OuterVolumeSpecName: "kube-api-access-qdzhj") pod "3b1c60db-544d-480d-9234-ee41ad93e3aa" (UID: "3b1c60db-544d-480d-9234-ee41ad93e3aa"). InnerVolumeSpecName "kube-api-access-qdzhj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.940788 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b1c60db-544d-480d-9234-ee41ad93e3aa" (UID: "3b1c60db-544d-480d-9234-ee41ad93e3aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:01:04 crc kubenswrapper[4740]: I0216 14:01:04.974552 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data" (OuterVolumeSpecName: "config-data") pod "3b1c60db-544d-480d-9234-ee41ad93e3aa" (UID: "3b1c60db-544d-480d-9234-ee41ad93e3aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.013963 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdzhj\" (UniqueName: \"kubernetes.io/projected/3b1c60db-544d-480d-9234-ee41ad93e3aa-kube-api-access-qdzhj\") on node \"crc\" DevicePath \"\""
Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.013993 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.014006 4740 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.014018 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1c60db-544d-480d-9234-ee41ad93e3aa-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.469175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-7tm7q" event={"ID":"3b1c60db-544d-480d-9234-ee41ad93e3aa","Type":"ContainerDied","Data":"cfb7f5048e1f526bca8580aae38a6068d9f9f89f5e184d8546cc26884d344bb8"}
Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.469502 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfb7f5048e1f526bca8580aae38a6068d9f9f89f5e184d8546cc26884d344bb8"
Feb 16 14:01:05 crc kubenswrapper[4740]: I0216 14:01:05.469301 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520841-7tm7q"
Feb 16 14:01:15 crc kubenswrapper[4740]: I0216 14:01:15.574877 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 14:01:15 crc kubenswrapper[4740]: I0216 14:01:15.575426 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 14:01:15 crc kubenswrapper[4740]: I0216 14:01:15.575466 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj"
Feb 16 14:01:15 crc kubenswrapper[4740]: I0216 14:01:15.576197 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2869e78774c1f00fe4f31559cf3edaa4b9383697775554e132318f0021981a4c"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 14:01:15 crc kubenswrapper[4740]: I0216 14:01:15.576268 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://2869e78774c1f00fe4f31559cf3edaa4b9383697775554e132318f0021981a4c" gracePeriod=600
Feb 16 14:01:16 crc kubenswrapper[4740]: I0216 14:01:16.592563 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="2869e78774c1f00fe4f31559cf3edaa4b9383697775554e132318f0021981a4c" exitCode=0
Feb 16 14:01:16 crc kubenswrapper[4740]: I0216 14:01:16.592673 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"2869e78774c1f00fe4f31559cf3edaa4b9383697775554e132318f0021981a4c"}
Feb 16 14:01:16 crc kubenswrapper[4740]: I0216 14:01:16.593312 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerStarted","Data":"f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b"}
Feb 16 14:01:16 crc kubenswrapper[4740]: I0216 14:01:16.593357 4740 scope.go:117] "RemoveContainer" containerID="e869f22e94c341d3b86ab6b87bf752487b5b4bf55f363ab4043c99a5b915f8e9"
Feb 16 14:02:06 crc kubenswrapper[4740]: I0216 14:02:06.979334 4740 scope.go:117] "RemoveContainer" containerID="5967744a58f90558a044d1988dda143828539df3212e40a650ea46f63383d4a1"
Feb 16 14:03:15 crc kubenswrapper[4740]: I0216 14:03:15.574927 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 14:03:15 crc kubenswrapper[4740]: I0216 14:03:15.575636 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.221049 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jtmnb"]
Feb 16 14:03:33 crc kubenswrapper[4740]: E0216 14:03:33.229191 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1c60db-544d-480d-9234-ee41ad93e3aa" containerName="keystone-cron"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.229213 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1c60db-544d-480d-9234-ee41ad93e3aa" containerName="keystone-cron"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.229480 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1c60db-544d-480d-9234-ee41ad93e3aa" containerName="keystone-cron"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.231376 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.239921 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtmnb"]
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.240047 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-catalog-content\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.240131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwgqv\" (UniqueName: \"kubernetes.io/projected/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-kube-api-access-rwgqv\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.240547 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-utilities\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.341352 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-utilities\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.341447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-catalog-content\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.341481 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwgqv\" (UniqueName: \"kubernetes.io/projected/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-kube-api-access-rwgqv\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.342263 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-catalog-content\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.342389 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-utilities\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.364592 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwgqv\" (UniqueName: \"kubernetes.io/projected/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-kube-api-access-rwgqv\") pod \"redhat-marketplace-jtmnb\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") " pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:33 crc kubenswrapper[4740]: I0216 14:03:33.598121 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:34 crc kubenswrapper[4740]: I0216 14:03:34.104578 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtmnb"]
Feb 16 14:03:34 crc kubenswrapper[4740]: I0216 14:03:34.936250 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8c43d70-33e9-42cf-8c2b-7e440dd9ab04" containerID="c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d" exitCode=0
Feb 16 14:03:34 crc kubenswrapper[4740]: I0216 14:03:34.936509 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerDied","Data":"c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d"}
Feb 16 14:03:34 crc kubenswrapper[4740]: I0216 14:03:34.936539 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerStarted","Data":"8ae4cb36b51a0aec1859f8648c6befd7a501edc2ee851f140a2cd236b7a67421"}
Feb 16 14:03:35 crc kubenswrapper[4740]: I0216 14:03:35.951288 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerStarted","Data":"ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3"}
Feb 16 14:03:36 crc kubenswrapper[4740]: I0216 14:03:36.961481 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8c43d70-33e9-42cf-8c2b-7e440dd9ab04" containerID="ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3" exitCode=0
Feb 16 14:03:36 crc kubenswrapper[4740]: I0216 14:03:36.961555 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerDied","Data":"ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3"}
Feb 16 14:03:36 crc kubenswrapper[4740]: I0216 14:03:36.961803 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerStarted","Data":"6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0"}
Feb 16 14:03:36 crc kubenswrapper[4740]: I0216 14:03:36.983056 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jtmnb" podStartSLOduration=2.565024981 podStartE2EDuration="3.983039706s" podCreationTimestamp="2026-02-16 14:03:33 +0000 UTC" firstStartedPulling="2026-02-16 14:03:34.938332692 +0000 UTC m=+4242.314681423" lastFinishedPulling="2026-02-16 14:03:36.356347397 +0000 UTC m=+4243.732696148" observedRunningTime="2026-02-16 14:03:36.976549563 +0000 UTC m=+4244.352898294" watchObservedRunningTime="2026-02-16 14:03:36.983039706 +0000 UTC m=+4244.359388427"
Feb 16 14:03:43 crc kubenswrapper[4740]: I0216 14:03:43.598328 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:43 crc kubenswrapper[4740]: I0216 14:03:43.599921 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:43 crc kubenswrapper[4740]: I0216 14:03:43.653560 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:44 crc kubenswrapper[4740]: I0216 14:03:44.085161 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:44 crc kubenswrapper[4740]: I0216 14:03:44.131577 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtmnb"]
Feb 16 14:03:45 crc kubenswrapper[4740]: I0216 14:03:45.575323 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 14:03:45 crc kubenswrapper[4740]: I0216 14:03:45.575719 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 14:03:46 crc kubenswrapper[4740]: I0216 14:03:46.045791 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jtmnb" podUID="f8c43d70-33e9-42cf-8c2b-7e440dd9ab04" containerName="registry-server" containerID="cri-o://6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0" gracePeriod=2
Feb 16 14:03:46 crc kubenswrapper[4740]: I0216 14:03:46.555918 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:46 crc kubenswrapper[4740]: I0216 14:03:46.704435 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwgqv\" (UniqueName: \"kubernetes.io/projected/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-kube-api-access-rwgqv\") pod \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") "
Feb 16 14:03:46 crc kubenswrapper[4740]: I0216 14:03:46.704584 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-catalog-content\") pod \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") "
Feb 16 14:03:46 crc kubenswrapper[4740]: I0216 14:03:46.704630 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-utilities\") pod \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\" (UID: \"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04\") "
Feb 16 14:03:46 crc kubenswrapper[4740]: I0216 14:03:46.705879 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-utilities" (OuterVolumeSpecName: "utilities") pod "f8c43d70-33e9-42cf-8c2b-7e440dd9ab04" (UID: "f8c43d70-33e9-42cf-8c2b-7e440dd9ab04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:03:46 crc kubenswrapper[4740]: I0216 14:03:46.712281 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-kube-api-access-rwgqv" (OuterVolumeSpecName: "kube-api-access-rwgqv") pod "f8c43d70-33e9-42cf-8c2b-7e440dd9ab04" (UID: "f8c43d70-33e9-42cf-8c2b-7e440dd9ab04"). InnerVolumeSpecName "kube-api-access-rwgqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:03:46 crc kubenswrapper[4740]: I0216 14:03:46.727149 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8c43d70-33e9-42cf-8c2b-7e440dd9ab04" (UID: "f8c43d70-33e9-42cf-8c2b-7e440dd9ab04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:03:46 crc kubenswrapper[4740]: I0216 14:03:46.807520 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwgqv\" (UniqueName: \"kubernetes.io/projected/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-kube-api-access-rwgqv\") on node \"crc\" DevicePath \"\""
Feb 16 14:03:46 crc kubenswrapper[4740]: I0216 14:03:46.807572 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:03:46 crc kubenswrapper[4740]: I0216 14:03:46.807594 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.061461 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8c43d70-33e9-42cf-8c2b-7e440dd9ab04" containerID="6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0" exitCode=0
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.061883 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerDied","Data":"6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0"}
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.061928 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtmnb" event={"ID":"f8c43d70-33e9-42cf-8c2b-7e440dd9ab04","Type":"ContainerDied","Data":"8ae4cb36b51a0aec1859f8648c6befd7a501edc2ee851f140a2cd236b7a67421"}
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.061959 4740 scope.go:117] "RemoveContainer" containerID="6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0"
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.062195 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtmnb"
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.099128 4740 scope.go:117] "RemoveContainer" containerID="ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3"
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.129441 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtmnb"]
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.142679 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtmnb"]
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.142827 4740 scope.go:117] "RemoveContainer" containerID="c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d"
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.171326 4740 scope.go:117] "RemoveContainer" containerID="6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0"
Feb 16 14:03:47 crc kubenswrapper[4740]: E0216 14:03:47.171775 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0\": container with ID starting with 6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0 not found: ID does not exist" containerID="6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0"
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.171871 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0"} err="failed to get container status \"6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0\": rpc error: code = NotFound desc = could not find container \"6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0\": container with ID starting with 6462560d1cda7df70ba8b40a77ae2ecd04e37977604000319fb65c81feb0aba0 not found: ID does not exist"
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.171913 4740 scope.go:117] "RemoveContainer" containerID="ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3"
Feb 16 14:03:47 crc kubenswrapper[4740]: E0216 14:03:47.173378 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3\": container with ID starting with ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3 not found: ID does not exist" containerID="ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3"
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.173418 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3"} err="failed to get container status \"ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3\": rpc error: code = NotFound desc = could not find container \"ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3\": container with ID starting with ff1c5370b986011e5a48e378d4771a7064f964166ec820593edec608af0f1bf3 not found: ID does not exist"
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.173445 4740 scope.go:117] "RemoveContainer" containerID="c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d"
Feb 16 14:03:47 crc kubenswrapper[4740]: E0216 14:03:47.173883 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d\": container with ID starting with c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d not found: ID does not exist" containerID="c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d"
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.173934 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d"} err="failed to get container status \"c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d\": rpc error: code = NotFound desc = could not find container \"c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d\": container with ID starting with c591e6b9a7c0638484a50b9edc0dd8572eb4b3134610a7a853f10c2359c7c47d not found: ID does not exist"
Feb 16 14:03:47 crc kubenswrapper[4740]: I0216 14:03:47.295208 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8c43d70-33e9-42cf-8c2b-7e440dd9ab04" path="/var/lib/kubelet/pods/f8c43d70-33e9-42cf-8c2b-7e440dd9ab04/volumes"
Feb 16 14:04:15 crc kubenswrapper[4740]: I0216 14:04:15.575177 4740 patch_prober.go:28] interesting pod/machine-config-daemon-q4qtj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 14:04:15 crc kubenswrapper[4740]: I0216 14:04:15.575794 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection
refused" Feb 16 14:04:15 crc kubenswrapper[4740]: I0216 14:04:15.575878 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" Feb 16 14:04:15 crc kubenswrapper[4740]: I0216 14:04:15.576715 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b"} pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:04:15 crc kubenswrapper[4740]: I0216 14:04:15.576776 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerName="machine-config-daemon" containerID="cri-o://f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b" gracePeriod=600 Feb 16 14:04:15 crc kubenswrapper[4740]: E0216 14:04:15.709325 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 14:04:16 crc kubenswrapper[4740]: I0216 14:04:16.358897 4740 generic.go:334] "Generic (PLEG): container finished" podID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" containerID="f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b" exitCode=0 Feb 16 14:04:16 crc kubenswrapper[4740]: I0216 14:04:16.359208 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" 
event={"ID":"a46e0708-a1b9-4055-8abc-b3d8de6e5245","Type":"ContainerDied","Data":"f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b"} Feb 16 14:04:16 crc kubenswrapper[4740]: I0216 14:04:16.359257 4740 scope.go:117] "RemoveContainer" containerID="2869e78774c1f00fe4f31559cf3edaa4b9383697775554e132318f0021981a4c" Feb 16 14:04:16 crc kubenswrapper[4740]: I0216 14:04:16.360388 4740 scope.go:117] "RemoveContainer" containerID="f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b" Feb 16 14:04:16 crc kubenswrapper[4740]: E0216 14:04:16.360880 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 14:04:31 crc kubenswrapper[4740]: I0216 14:04:31.281802 4740 scope.go:117] "RemoveContainer" containerID="f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b" Feb 16 14:04:31 crc kubenswrapper[4740]: E0216 14:04:31.283201 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 14:04:42 crc kubenswrapper[4740]: I0216 14:04:42.281602 4740 scope.go:117] "RemoveContainer" containerID="f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b" Feb 16 14:04:42 crc kubenswrapper[4740]: E0216 14:04:42.282446 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 14:04:54 crc kubenswrapper[4740]: I0216 14:04:54.281750 4740 scope.go:117] "RemoveContainer" containerID="f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b" Feb 16 14:04:54 crc kubenswrapper[4740]: E0216 14:04:54.282693 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 14:05:08 crc kubenswrapper[4740]: I0216 14:05:08.282432 4740 scope.go:117] "RemoveContainer" containerID="f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b" Feb 16 14:05:08 crc kubenswrapper[4740]: E0216 14:05:08.283503 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" Feb 16 14:05:23 crc kubenswrapper[4740]: I0216 14:05:23.287306 4740 scope.go:117] "RemoveContainer" containerID="f5e3e00e65ab16e1ee031b9858d89239d929b9425e54235b251c4adb0a5d2b9b" Feb 16 14:05:23 crc kubenswrapper[4740]: E0216 14:05:23.288304 4740 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q4qtj_openshift-machine-config-operator(a46e0708-a1b9-4055-8abc-b3d8de6e5245)\"" pod="openshift-machine-config-operator/machine-config-daemon-q4qtj" podUID="a46e0708-a1b9-4055-8abc-b3d8de6e5245" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515144622060024445 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015144622061017363 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015144611130016501 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015144611130015451 5ustar corecore